From: Jay L. <jl...@sl...> - 2019-09-29 20:29:49
|
Hi, I have a small MooseFS cluster running on four identical nodes. Everything was running smoothly until a week ago, when one of the nodes started showing a value under "Last Error." The "Last Error" field updates every couple of days. The status is still shown as "Ok" for the drive. I have run scans on the hard drive on the "Last Error" node, and they passed without issues. I don't see any issues in the SMART data either. What is going on, and what exactly does a value under "Last Error" tell me? Can someone advise on what else I should check? Thank you, Jay |
From: Wilson, S. M <st...@pu...> - 2019-09-27 14:50:43
|
Bill, You shouldn't run a metalogger on the same server as the master. I normally have two to four chunkservers, one of which will also double as the master and another will run the metalogger service. So, in answer to question #2, I would recommend not running the metalogger on your low-powered systems and only on one or two of the i7's. My normal procedure for changing to a new master is this (NM is new master, OM is old master): a. stop the master on OM b. stop the metalogger on NM c. copy /var/lib/mfs/metadata* from OM to NM d. drop hostname alias from OM e. add hostname alias to NM f. start up master on NM g. start up metalogger on OM There are other details, of course, but that covers the main points. Steve ________________________________ From: William Kenworthy <wil...@gm...> Sent: Friday, September 27, 2019 5:36 AM To: moo...@li... <moo...@li...> Subject: [MooseFS-Users] New user questions Hi, I have set up a small CE moosefs system and would like to clarify some parts of the documentation: 1. on the master, *should* you run a metalogger - sounds like its unnecessary? Does the answer to this change if the master has a chunkserver attached as well? If I have enough hardware to run separate master, metalogger and chunkservers - is this the most desirable setup? 2. Is it good practise to have a metalogger on every chunkserver, or just one or two? - I assume there could be network problems with too many, but how many metaloggers to chunkservers is adequate? I am running some storage on low powered odroid-HC2's and others on i7 hardware so would like to understand if running on the odroids is not really necessary even though its possible. 3. Changing to a new master (e.g., hardware maintenance etc, so a planned event): is it sufficient to: a. bring down the current master b. change DNS to point to the new master and refresh c. on another system with a normally running metalogger, bring up the new master with -a d. on the original master system spin up a new metalogger * what other intervention would be necessary? * assume its all been reconfigured The documentation is good, but unclear on the above as its a generic document that covers small and large systems in Pro and CE varieties. BillK |
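A minimal shell sketch of the switchover Steve outlines above, assuming a Community Edition setup where the mfsmaster/mfsmetalogger binaries are controlled directly, metadata lives in /var/lib/mfs, and the hostnames and the DNS step are placeholders for your own environment:

    #!/bin/sh
    # Rough outline of steps a-g above, run from an admin host with root ssh access.
    OM=oldmaster.example.com    # hypothetical old master
    NM=newmaster.example.com    # hypothetical new master

    ssh "$OM" 'mfsmaster stop'                                  # a. stop the master on OM
    ssh "$NM" 'mfsmetalogger stop'                              # b. stop the metalogger on NM
    scp -3 "$OM:/var/lib/mfs/metadata*" "$NM:/var/lib/mfs/"     # c. copy metadata files to NM
    # d./e. repoint the master hostname alias from OM to NM (site-specific:
    #       DNS zone update or /etc/hosts on all clients and chunkservers)
    ssh "$NM" 'mfsmaster start'                                 # f. start the master on NM
    ssh "$OM" 'mfsmetalogger start'                             # g. start a metalogger on OM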
From: William K. <wil...@gm...> - 2019-09-27 09:36:32
|
Hi, I have set up a small CE moosefs system and would like to clarify some parts of the documentation: 1. on the master, **should** you run a metalogger - it sounds like it's unnecessary? Does the answer to this change if the master has a chunkserver attached as well? If I have enough hardware to run separate master, metalogger and chunkservers - is this the most desirable setup? 2. Is it good practise to have a metalogger on every chunkserver, or just one or two? - I assume there could be network problems with too many, but what ratio of metaloggers to chunkservers is adequate? I am running some storage on low-powered odroid-HC2's and some on i7 hardware, so I would like to understand whether running metaloggers on the odroids is really necessary even though it's possible. 3. Changing to a new master (e.g., hardware maintenance etc., so a planned event): is it sufficient to: a. bring down the current master b. change DNS to point to the new master and refresh c. on another system with a normally running metalogger, bring up the new master with -a d. on the original master system spin up a new metalogger * what other intervention would be necessary? * assume it's all been reconfigured The documentation is good, but unclear on the above as it's a generic document that covers small and large systems in Pro and CE varieties. BillK |
From: David M. <dav...@pr...> - 2019-09-13 14:38:10
|
Hi, I'd like to switch my master to a new server as gracefully as possible. MFS supports multiple masters, so I believe I could set up the new master as a second master, integrate it into the cluster, then switch off the old master. - Besides updating all the MFS configs, iptables, etc on the chunkservers, logger and old master, is there anything else I should do? - What are the main problems that could arise during the new master initialization that I should monitor? - When would I know it's safe to turn off the old master? Thanks in advance for any advice. David Sent with [ProtonMail](https://protonmail.com) Secure Email. |
From: Piotr R. K. <pio...@mo...> - 2019-07-18 12:59:11
|
Hello Alexander, sure, we will do. Sorry for the delay and thanks for the reminder. Best regards, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 18 Jul 2019, at 10:12 AM, Alexander AKHOBADZE <ba...@ya...> wrote: > > Hi dear developers! > > Let me ask will you make a FreeBSD port for current 3.0.105 MooseFS version? > > WBR > Alexander > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Alexander A. <ba...@ya...> - 2019-07-18 08:12:26
|
Hi dear developers! Let me ask: will you make a FreeBSD port for the current MooseFS version, 3.0.105? WBR Alexander |
From: Jay L. <jl...@sl...> - 2019-04-18 14:46:33
|
Thank you for the reply. I am starting small and so will only initially have four drives each directly attached to their own compute instance. Would SSDs reduce power consumption? My current plan is spinning disk, but I would consider SSDs if the power cost savings was significantly less. My current research suggests that the savings might not be significant and so I am leaning towards HDDs since they are cheaper. Jay On Thu, Apr 18, 2019 at 9:57 AM Aleksander Wieliczko < ale...@mo...> wrote: > Hi Jay, > > I would like to inform you that at the moment we don't have such a feature. > Basically, MooseFS never sleeps. It means that even if there are no IO > operations all chunks are slowly migrated in the background to avoid > silent data corruption. > > By the way, how many spinning disks you have? Personally, I believe that > you should use some kind of IPMI and schedule the whole cluster power off. > -- > Best regards, > Alex > > Aleksander Wieliczko > System Engineer > MooseFS Development & Support Team | moosefs.co > > On 18.04.2019 14:33, Jay Livens wrote: > > Hi, > > > > I am considering implementing MooseFS in my house and am thinking about > > power usage. As you can imagine, the access profile will be high at > > certain times and low at others. (Low at night, typically.) > > > > Does MooseFS include any drive power management? It seems like a great > > feature would be to spin down chunkserver drives either on a schedule or > > based on inactivity. However, I cannot find any information about this. > > > > Does MooseFS including any power management like this? > > > > Thank you, > > > > Jay > > > > > > _________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Aleksander W. <ale...@mo...> - 2019-04-18 13:57:11
|
Hi Jay, I would like to inform you that at the moment we don't have such a feature. Basically, MooseFS never sleeps. It means that even if there are no IO operations all chunks are slowly migrated in the background to avoid silent data corruption. By the way, how many spinning disks you have? Personally, I believe that you should use some kind of IPMI and schedule the whole cluster power off. -- Best regards, Alex Aleksander Wieliczko System Engineer MooseFS Development & Support Team | moosefs.co On 18.04.2019 14:33, Jay Livens wrote: > Hi, > > I am considering implementing MooseFS in my house and am thinking about > power usage. As you can imagine, the access profile will be high at > certain times and low at others. (Low at night, typically.) > > Does MooseFS include any drive power management? It seems like a great > feature would be to spin down chunkserver drives either on a schedule or > based on inactivity. However, I cannot find any information about this. > > Does MooseFS including any power management like this? > > Thank you, > > Jay > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
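MooseFS itself offers nothing here, so Alex's IPMI suggestion would have to be scripted externally. A rough sketch, run by cron from an always-on admin box; hostnames, BMC addresses and credentials are placeholders, the daemons are stopped cleanly before power is cut, and they come back from init at boot:

    #!/bin/sh
    # moosefs-night-off.sh -- illustrative only; schedule via cron (e.g. 23:30)
    CHUNKSERVERS="cs1 cs2 cs3 cs4"          # hypothetical chunkserver hosts
    for h in $CHUNKSERVERS; do
        ssh "$h" 'mfschunkserver stop'      # stop chunkservers cleanly first
    done
    ssh mfsmaster 'mfsmaster stop'          # then stop the master
    for bmc in cs1-ipmi cs2-ipmi cs3-ipmi cs4-ipmi master-ipmi; do
        ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis power soft
    done
    # A matching morning job would issue "chassis power on" to each BMC.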
From: Alexander A. <ba...@ya...> - 2019-04-18 13:02:01
|
Hi! The first solution to minimize size of MooseFS instance is to split one big Moose into two or more smaller. In this case you have two or more separate master server serving separate data subset. Nobody prevents you from simultaneously mounting directories from many Moose clusters on one host. Also you can launch more then one chunk server on one host. The second solution is to not to place on Moose tons of small files by itself. Make one big file, format it in EXT4 or XFS and mount this big file on any client host requiring this data. It saves/preserves big amount of master's RAM. On the other hand - files can not be shared between hosts. wbr Aleaxander On 17.04.2019 15:20, Wilson, Steven M wrote: > > Yes, that's a lot of files! I've been working with the users to try > to figure out ways to reduce the number of files but that's a slow > process. > > > I have 256GB of memory in the master server. mfsmaster has 121GB > allocated from when it reached the peak but "only" 81GB in use right > now. It takes 2.5m to write out the metadata from memory but I only > do that every 6 hours. I even have /var/lib/mfs on an NVMe SSD to try > to make that as fast as possible. The master server has a Xeon > E5-1630v3 running at 3.7GHz. > > > If anyone else has some good ideas on how to improve performance in > general for file systems with 100's of millions of files, I'd like to > hear them! > > > Steve > > > > ------------------------------------------------------------------------ > *From:* Alexander AKHOBADZE <ba...@ya...> > *Sent:* Wednesday, April 17, 2019 2:23 AM > *To:* moo...@li... > *Subject:* Re: [MooseFS-Users] Long and severe performance impact of > deleting files > > > Hi! > > WOW!!! 300-400 millions!!! This is a huge workload IMHO. > > Let me ask how much RAM is installed in your master server and how > much of it your Master-process uses to serve so many files? > > How much time Master spends to hourly save the whole metadata? > > > On 16.04.2019 23:04, Wilson, Steven M wrote: >> >> Hi, >> >> >> One of our MooseFS file systems has four chunkservers and just over >> 300 million files. A few weeks ago, the number of files had >> increased to almost 400 million and then a user deleted close to 100 >> million files at one time. That dramatically impacted performance on >> the file system and it took about four weeks for the file system to >> return to its normal level of performance. Users were reporting that >> their I/O intensive jobs were taking about 3 times longer to >> complete. And they were also complaining that their active desktop >> sessions were very sluggish and almost unusable at times. >> >> >> Our chunkservers are running 3.0.103 except for one which is still at >> 3.0.97 (soon to be upgraded). The underlying file system is XFS for >> most, but not all, of the disks in each chunkserver (we have a few >> ZFS and a few ext4). We have a goal of 2 for every file in the file >> system. The chunk servers are about 95% full. >> >> >> The chunk deletions per minute graph shows it starting on March 20 >> with about 18k deletions per minute. By March 25 it is steady at 12K >> deletions per minute. Then around March 31 it drops to 8K. By April >> 4 we are at 5K and by April 11 it dropped to 2.5K. And finally by >> yesterday, May 15, we are averaging 1.5K deletions per minute and our >> performance has returned to almost normal. >> >> >> During this time the disk utilization (as seen from iostat) on the >> disks in the chunkservers were between 60% and 100%. 
Now we're down >> to a more reasonable 50% utilization or less. >> >> >> Is this an inherent issue with MooseFS or are there ways to lessen >> the severe performance impact of deleting large numbers of files? >> Has anyone else expierienced this behavior? I assume there must be >> something going on in the background for coalescing free space, etc. >> but it sure seems to come with a steep penalty. >> >> >> Thanks, >> >> >> Steve >> >> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
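To illustrate Alexander's second suggestion (many small files packed into one big MooseFS file and loop-mounted on a single client), a rough sketch assuming the MooseFS mount is at /mnt/mfs; remember the container can only be mounted read-write on one host at a time:

    # Create a sparse 100 GiB container file on MooseFS, put a local
    # filesystem inside it, and loop-mount it on one client.
    truncate -s 100G /mnt/mfs/smallfiles.img
    mkfs.ext4 -F /mnt/mfs/smallfiles.img        # -F: allow mkfs on a regular file
    mkdir -p /srv/smallfiles
    mount -o loop /mnt/mfs/smallfiles.img /srv/smallfiles
    # The master now tracks one inode and its chunks instead of an entry
    # per small file, which is what saves the master's RAM.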
From: Jay L. <jl...@sl...> - 2019-04-18 13:00:34
|
Hi, I am considering implementing MooseFS in my house and am thinking about power usage. As you can imagine, the access profile will be high at certain times and low at others. (Low at night, typically.) Does MooseFS include any drive power management? It seems like a great feature would be to spin down chunkserver drives either on a schedule or based on inactivity. However, I cannot find any information about this. Does MooseFS include any power management like this? Thank you, Jay |
From: Wilson, S. M <st...@pu...> - 2019-04-17 12:35:08
|
Yes, that's a lot of files! I've been working with the users to try to figure out ways to reduce the number of files but that's a slow process. I have 256GB of memory in the master server. mfsmaster has 121GB allocated from when it reached the peak but "only" 81GB in use right now. It takes 2.5m to write out the metadata from memory but I only do that every 6 hours. I even have /var/lib/mfs on an NVMe SSD to try to make that as fast as possible. The master server has a Xeon E5-1630v3 running at 3.7GHz. If anyone else has some good ideas on how to improve performance in general for file systems with 100's of millions of files, I'd like to hear them! Steve ________________________________ From: Alexander AKHOBADZE <ba...@ya...> Sent: Wednesday, April 17, 2019 2:23 AM To: moo...@li... Subject: Re: [MooseFS-Users] Long and severe performance impact of deleting files Hi! WOW!!! 300-400 millions!!! This is a huge workload IMHO. Let me ask how much RAM is installed in your master server and how much of it your Master-process uses to serve so many files? How much time Master spends to hourly save the whole metadata? On 16.04.2019 23:04, Wilson, Steven M wrote: Hi, One of our MooseFS file systems has four chunkservers and just over 300 million files. A few weeks ago, the number of files had increased to almost 400 million and then a user deleted close to 100 million files at one time. That dramatically impacted performance on the file system and it took about four weeks for the file system to return to its normal level of performance. Users were reporting that their I/O intensive jobs were taking about 3 times longer to complete. And they were also complaining that their active desktop sessions were very sluggish and almost unusable at times. Our chunkservers are running 3.0.103 except for one which is still at 3.0.97 (soon to be upgraded). The underlying file system is XFS for most, but not all, of the disks in each chunkserver (we have a few ZFS and a few ext4). We have a goal of 2 for every file in the file system. The chunk servers are about 95% full. The chunk deletions per minute graph shows it starting on March 20 with about 18k deletions per minute. By March 25 it is steady at 12K deletions per minute. Then around March 31 it drops to 8K. By April 4 we are at 5K and by April 11 it dropped to 2.5K. And finally by yesterday, May 15, we are averaging 1.5K deletions per minute and our performance has returned to almost normal. During this time the disk utilization (as seen from iostat) on the disks in the chunkservers were between 60% and 100%. Now we're down to a more reasonable 50% utilization or less. Is this an inherent issue with MooseFS or are there ways to lessen the severe performance impact of deleting large numbers of files? Has anyone else expierienced this behavior? I assume there must be something going on in the background for coalescing free space, etc. but it sure seems to come with a steep penalty. Thanks, Steve _________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Wilson, S. M <st...@pu...> - 2019-04-17 12:22:16
|
________________________________ From: Ricardo J. Barberis <ric...@do...> Sent: Tuesday, April 16, 2019 7:10 PM To: moo...@li... Subject: Re: [MooseFS-Users] Long and severe performance impact of deleting files El Martes 16/04/2019 a las 17:04, Wilson, Steven M escribió: > Hi, > > > One of our MooseFS file systems has four chunkservers and just over 300 > million files. A few weeks ago, the number of files had increased to > almost 400 million and then a user deleted close to 100 million files at > one time. That dramatically impacted performance on the file system and it > took about four weeks for the file system to return to its normal level of > performance. Users were reporting that their I/O intensive jobs were > taking about 3 times longer to complete. And they were also complaining > that their active desktop sessions were very sluggish and almost unusable > at times. > > > Our chunkservers are running 3.0.103 except for one which is still at > 3.0.97 (soon to be upgraded). The underlying file system is XFS for most, > but not all, of the disks in each chunkserver (we have a few ZFS and a few > ext4). We have a goal of 2 for every file in the file system. The chunk > servers are about 95% full. > > > The chunk deletions per minute graph shows it starting on March 20 with > about 18k deletions per minute. By March 25 it is steady at 12K deletions > per minute. Then around March 31 it drops to 8K. By April 4 we are at 5K > and by April 11 it dropped to 2.5K. And finally by yesterday, May 15, we > are averaging 1.5K deletions per minute and our performance has returned to > almost normal. > > > During this time the disk utilization (as seen from iostat) on the disks in > the chunkservers were between 60% and 100%. Now we're down to a more > reasonable 50% utilization or less. > > > Is this an inherent issue with MooseFS or are there ways to lessen the > severe performance impact of deleting large numbers of files? Has anyone > else expierienced this behavior? I assume there must be something going on > in the background for coalescing free space, etc. but it sure seems to come > with a steep penalty. > > > Thanks, > > Steve My case is not as severe, but in general we are affected by deletions and replications. We mitigate this with a script that, during the day sets on the masters these variables: CHUNKS_SOFT_DEL_LIMIT = 1 CHUNKS_HARD_DEL_LIMIT = 1 CHUNKS_WRITE_REP_LIMIT = 1 CHUNKS_READ_REP_LIMIT = 1 And at night it sets them so: CHUNKS_SOFT_DEL_LIMIT = 10 CHUNKS_HARD_DEL_LIMIT = 25 CHUNKS_WRITE_REP_LIMIT = 5 CHUNKS_READ_REP_LIMIT = 10 That helps us a lot, especially drugin peak hours. Ocasionally we tweak those values by hand, e.g. in case of deleting a lot of files, we might increase CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT and lower the other two. After tweaking those parameters you just need to reload mfsmater. HTH, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com<http://www.DonWeb.com> Home WebHosting Estados Unidos Ilimitado - Registro de Dominios - Revendedores - VPS<http://www.donweb.com/> www.donweb.com DonWeb Web Hosting y Registro de Dominios. Alojamiento para sitios desde u$s 2,27, Registro de dominios, Hosting para revendedores, Servidores Dedicados, Planes VPS (Virtual Private Servers). Somos proveedor lider en LATAM (Fuente Netcraft) ________________________________ Hi, Ricardo, Thanks for sharing that. After reading your reply, I realized that I had seen that mentioned before (probably by you!) on this list. 
I'll probably do something similar here. Steve |
From: Alexander A. <ba...@ya...> - 2019-04-17 06:23:20
|
Hi! WOW!!! 300-400 millions!!! This is a huge workload IMHO. Let me ask how much RAM is installed in your master server and how much of it your Master-process uses to serve so many files? How much time Master spends to hourly save the whole metadata? On 16.04.2019 23:04, Wilson, Steven M wrote: > > Hi, > > > One of our MooseFS file systems has four chunkservers and just over > 300 million files. A few weeks ago, the number of files had increased > to almost 400 million and then a user deleted close to 100 million > files at one time. That dramatically impacted performance on the file > system and it took about four weeks for the file system to return to > its normal level of performance. Users were reporting that their I/O > intensive jobs were taking about 3 times longer to complete. And they > were also complaining that their active desktop sessions were very > sluggish and almost unusable at times. > > > Our chunkservers are running 3.0.103 except for one which is still at > 3.0.97 (soon to be upgraded). The underlying file system is XFS for > most, but not all, of the disks in each chunkserver (we have a few ZFS > and a few ext4). We have a goal of 2 for every file in the file > system. The chunk servers are about 95% full. > > > The chunk deletions per minute graph shows it starting on March 20 > with about 18k deletions per minute. By March 25 it is steady at 12K > deletions per minute. Then around March 31 it drops to 8K. By April 4 > we are at 5K and by April 11 it dropped to 2.5K. And finally by > yesterday, May 15, we are averaging 1.5K deletions per minute and our > performance has returned to almost normal. > > > During this time the disk utilization (as seen from iostat) on the > disks in the chunkservers were between 60% and 100%. Now we're down > to a more reasonable 50% utilization or less. > > > Is this an inherent issue with MooseFS or are there ways to lessen the > severe performance impact of deleting large numbers of files? Has > anyone else expierienced this behavior? I assume there must be > something going on in the background for coalescing free space, etc. > but it sure seems to come with a steep penalty. > > > Thanks, > > > Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Ricardo J. B. <ric...@do...> - 2019-04-16 23:30:14
|
On Tuesday 16/04/2019 at 17:04, Wilson, Steven M wrote: > Hi, > > > One of our MooseFS file systems has four chunkservers and just over 300 > million files. A few weeks ago, the number of files had increased to > almost 400 million and then a user deleted close to 100 million files at > one time. That dramatically impacted performance on the file system and it > took about four weeks for the file system to return to its normal level of > performance. Users were reporting that their I/O intensive jobs were > taking about 3 times longer to complete. And they were also complaining > that their active desktop sessions were very sluggish and almost unusable > at times. > > > Our chunkservers are running 3.0.103 except for one which is still at > 3.0.97 (soon to be upgraded). The underlying file system is XFS for most, > but not all, of the disks in each chunkserver (we have a few ZFS and a few > ext4). We have a goal of 2 for every file in the file system. The chunk > servers are about 95% full. > > > The chunk deletions per minute graph shows it starting on March 20 with > about 18k deletions per minute. By March 25 it is steady at 12K deletions > per minute. Then around March 31 it drops to 8K. By April 4 we are at 5K > and by April 11 it dropped to 2.5K. And finally by yesterday, May 15, we > are averaging 1.5K deletions per minute and our performance has returned to > almost normal. > > > During this time the disk utilization (as seen from iostat) on the disks in > the chunkservers was between 60% and 100%. Now we're down to a more > reasonable 50% utilization or less. > > > Is this an inherent issue with MooseFS or are there ways to lessen the > severe performance impact of deleting large numbers of files? Has anyone > else experienced this behavior? I assume there must be something going on > in the background for coalescing free space, etc. but it sure seems to come > with a steep penalty. > > > Thanks, > > Steve My case is not as severe, but in general we are affected by deletions and replications. We mitigate this with a script that, during the day, sets these variables on the masters: CHUNKS_SOFT_DEL_LIMIT = 1 CHUNKS_HARD_DEL_LIMIT = 1 CHUNKS_WRITE_REP_LIMIT = 1 CHUNKS_READ_REP_LIMIT = 1 And at night it sets them to: CHUNKS_SOFT_DEL_LIMIT = 10 CHUNKS_HARD_DEL_LIMIT = 25 CHUNKS_WRITE_REP_LIMIT = 5 CHUNKS_READ_REP_LIMIT = 10 That helps us a lot, especially during peak hours. Occasionally we tweak those values by hand, e.g. in case of deleting a lot of files, we might increase CHUNKS_SOFT_DEL_LIMIT and CHUNKS_HARD_DEL_LIMIT and lower the other two. After tweaking those parameters you just need to reload mfsmaster. HTH, -- Ricardo J. Barberis Senior SysAdmin / IT Architect DonWeb La Actitud Es Todo www.DonWeb.com _____ |
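A minimal sketch of the kind of day/night switch Ricardo describes, assuming mfsmaster.cfg is at /etc/mfs/mfsmaster.cfg and already contains the four CHUNKS_*_LIMIT lines; the cron times are arbitrary examples:

    #!/bin/sh
    # set-mfs-limits.sh -- usage: set-mfs-limits.sh day|night
    CFG=/etc/mfs/mfsmaster.cfg
    case "$1" in
        day)   SOFT=1;  HARD=1;  WREP=1; RREP=1 ;;
        night) SOFT=10; HARD=25; WREP=5; RREP=10 ;;
        *)     echo "usage: $0 day|night" >&2; exit 1 ;;
    esac
    sed -i \
        -e "s/^CHUNKS_SOFT_DEL_LIMIT.*/CHUNKS_SOFT_DEL_LIMIT = $SOFT/" \
        -e "s/^CHUNKS_HARD_DEL_LIMIT.*/CHUNKS_HARD_DEL_LIMIT = $HARD/" \
        -e "s/^CHUNKS_WRITE_REP_LIMIT.*/CHUNKS_WRITE_REP_LIMIT = $WREP/" \
        -e "s/^CHUNKS_READ_REP_LIMIT.*/CHUNKS_READ_REP_LIMIT = $RREP/" \
        "$CFG"
    mfsmaster reload    # apply the new limits without restarting the master
    # Example crontab entries:
    #   0 7  * * *  /usr/local/sbin/set-mfs-limits.sh day
    #   0 23 * * *  /usr/local/sbin/set-mfs-limits.sh night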
From: Wilson, S. M <st...@pu...> - 2019-04-16 20:19:30
|
Hi, One of our MooseFS file systems has four chunkservers and just over 300 million files. A few weeks ago, the number of files had increased to almost 400 million and then a user deleted close to 100 million files at one time. That dramatically impacted performance on the file system and it took about four weeks for the file system to return to its normal level of performance. Users were reporting that their I/O intensive jobs were taking about 3 times longer to complete. And they were also complaining that their active desktop sessions were very sluggish and almost unusable at times. Our chunkservers are running 3.0.103 except for one which is still at 3.0.97 (soon to be upgraded). The underlying file system is XFS for most, but not all, of the disks in each chunkserver (we have a few ZFS and a few ext4). We have a goal of 2 for every file in the file system. The chunk servers are about 95% full. The chunk deletions per minute graph shows it starting on March 20 with about 18k deletions per minute. By March 25 it is steady at 12K deletions per minute. Then around March 31 it drops to 8K. By April 4 we are at 5K and by April 11 it dropped to 2.5K. And finally by yesterday, May 15, we are averaging 1.5K deletions per minute and our performance has returned to almost normal. During this time the disk utilization (as seen from iostat) on the disks in the chunkservers was between 60% and 100%. Now we're down to a more reasonable 50% utilization or less. Is this an inherent issue with MooseFS or are there ways to lessen the severe performance impact of deleting large numbers of files? Has anyone else experienced this behavior? I assume there must be something going on in the background for coalescing free space, etc. but it sure seems to come with a steep penalty. Thanks, Steve |
From: Grouchy S. <sys...@gr...> - 2019-03-24 03:05:07
|
Hello, I am also interested in an answer to this question. On 2/21/19 6:58 PM, Piotr Robert Konopelko wrote: > Hi Steve, > > It will be possible to answer your question middle next week the earliest. > I am sorry for the delay in advance > > Best, > Piotr > > *Piotr Robert Konopelko*| m:+48 601 476 440 | e: > pio...@mo... <mailto:pio...@mo...> > *Business & Technical Support Manager* > MooseFS Client Support Team > > WWW <http://moosefs.com/> | GitHub > <https://github.com/moosefs/moosefs> | Twitter > <https://twitter.com/moosefs> | Facebook > <https://www.facebook.com/moosefs> | LinkedIn > <https://www.linkedin.com/company/moosefs> > >> On 20 Feb 2019, at 6:42 PM, Wilson, Steven M <st...@pu... >> <mailto:st...@pu...>> wrote: >> >> Hi, >> >> Most of my MooseFS storage systems use two chunk servers and store >> two copies of each file, one on each chunk server. I have one very >> active storage system that has four chunk servers but still only uses >> a goal of 2. I need to do some maintenance on one of these chunk >> servers and was thinking of putting it into maintenance mode so that >> there wouldn't be any unnecessary replication during the hour or two >> that this chunk server is out of service. But I noticed this in the >> MooseFS 3.0 manual: >> >> >> >> "Note: If number of Chunkservers in maintenance mode is equal or >> greater than 20% of all Chunkserver, MooseFS treats all >> Chunkservers like maintenance mode wouldn’t be enabled at all." >> >> >> If I understand this correctly, I would need at least six >> active chunk servers before I could take one offline for >> maintenance. Is that correct? If so, what is the reasoning behind >> this limitation? >> >> Not only is limiting needless replication important to me but even >> more important is the blocking of I/O operations to that chunk server >> while it is being shut down. Even when I have only two chunk >> servers, I would like to be able to enable maintenance mode for this >> particular benefit as described in the manual: >> >> >> "When you turn maintenance mode on for specific Chunkserver a few >> seconds before stop, MooseFS will finish write operations and >> won’t start a new ones on this Chunkserver." >> >> As always, thanks for you help and for a terrific distributed >> filesystem! >> >> Steve > > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Roman <int...@gm...> - 2019-03-19 18:27:29
|
Hi, all! After updating the master from 3.0.101 to 3.0.103, I noticed increased load on the cluster, approximately 0.5 load units for each storage node. Is this normal? OS: CentOS 7. This is from one of the chunkservers; the picture is the same on the others. Upgraded 18 March. The master has a normal load. Roman |
From: Piotr R. K. <pio...@mo...> - 2019-02-22 02:58:36
|
Hi Steve, It will be possible to answer your question middle next week the earliest. I am sorry for the delay in advance Best, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 20 Feb 2019, at 6:42 PM, Wilson, Steven M <st...@pu...> wrote: > > Hi, > > Most of my MooseFS storage systems use two chunk servers and store two copies of each file, one on each chunk server. I have one very active storage system that has four chunk servers but still only uses a goal of 2. I need to do some maintenance on one of these chunk servers and was thinking of putting it into maintenance mode so that there wouldn't be any unnecessary replication during the hour or two that this chunk server is out of service. But I noticed this in the MooseFS 3.0 manual: > > "Note: If number of Chunkservers in maintenance mode is equal or greater than 20% of all Chunkserver, MooseFS treats all Chunkservers like maintenance mode wouldn’t be enabled at all." > > If I understand this correctly, I would need at least six active chunk servers before I could take one offline for maintenance. Is that correct? If so, what is the reasoning behind this limitation? > > Not only is limiting needless replication important to me but even more important is the blocking of I/O operations to that chunk server while it is being shut down. Even when I have only two chunk servers, I would like to be able to enable maintenance mode for this particular benefit as described in the manual: > > "When you turn maintenance mode on for specific Chunkserver a few seconds before stop, MooseFS will finish write operations and won’t start a new ones on this Chunkserver." > > As always, thanks for you help and for a terrific distributed filesystem! > > Steve |
From: Piotr R. K. <pio...@mo...> - 2019-02-21 01:45:31
|
Hi David, please reformat it and connect as mentioned earlier – and if there would be the situation that issue would repeat, maybe there is some issue with HDD itself. Best, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 19 Feb 2019, at 2:28 AM, David Myer <dav...@pr...> wrote: > > Hi Piotr, > > Thank you for the prompt response - here are some details on the disk: > > > Via $ smartctl -a /dev/sdb1: > > Raw_Read_Error_Rate: 0 > Throughput_Performance: 150 > Spin_Up_Time: 591 (Average 665) > Start_Stop_Count: 40 > Reallocated_Sector_Ct: 0 > Seek_Error_Rate: 0 > Seek_Time_Performance: 33 > Power_On_Hours: 32285 > Spin_Retry_Count: 0 > Power_Cycle_Count: 40 > Power-Off_Retract_Count: 695 > Load_Cycle_Count: 695 > Temperature_Celsius: 38 (Min/Max 21/46) > Reallocated_Event_Count: 0 > Current_Pending_Sector: 0 > Offline_Uncorrectable: 0 > UDMA_CRC_Error_Count: 0 > > > Via $ cat /proc/mounts: > /dev/sdb1 /disk3 ext4 rw,seclabel,relatime,data=ordered 0 0 > > > Would you recommend reformatting as xfs rather than ext4? > > Thanks, > Dave > > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ > On Monday, February 18, 2019 4:55 PM, Piotr Robert Konopelko <pio...@ge...> wrote: > >> Hi Dave, >> Thank you for your e-mail. >> >> Are there any "reallocated sectors" of this HDD? If no, it should be fine to reformat it (just in case) and add to Chunkserver configuration then (as empty). If the issue will persist (i.e. MooseFS will mark it as damaged soon again), you should look for H/W issue (HDD itself / controller / SATA cable etc.). >> >> What file system are you using? Does it have compression enabled? If compression is enabled, MooseFS may incorrectly consider such a disk as damaged, as number of blocks can change on compressed file systems. Then for such a compressed underlying filesystem you should consider using "~" before path to mounted partition (see more in man mfshdd.cfg and comments directly in this file). >> >> Hope it helps. >> >> Best regards, >> Piotr >> >> Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> >> Business & Technical Support Manager >> MooseFS Client Support Team >> >> WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> >> >>> On 19 Feb 2019, at 1:22 AM, David Myer via moosefs-users <moo...@li... <mailto:moo...@li...>> wrote: >>> >>> Dear MooseFS Users, >>> >>> I had a 1TB disk in my cluster that had the "damaged" status. I followed the removal steps and took it out of the cluster. I then performed a SMART long test which passed. >>> >>> Should I continue with replacement of the disk, or should I just reformat it and re-add it to the cluster? If there's nothing wrong with the disk I would rather avoid the time and potential risks of hot swapping it from a live chunkserver. >>> >>> Any advice would be appreciated. >>> >>> Thanks, >>> Dave > |
From: Tom I. H. <ti...@ha...> - 2019-02-20 18:34:35
|
Piotr Robert Konopelko <pio...@ge...> writes: > it has been fixed in this commit: > https://github.com/moosefs/moosefs/commit/02a58bf61e93fe82afe22f374e62912d89a66f13 Thank you, Piotr! :) -tih -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay |
From: Wilson, S. M <st...@pu...> - 2019-02-20 17:57:44
|
Hi, Most of my MooseFS storage systems use two chunk servers and store two copies of each file, one on each chunk server. I have one very active storage system that has four chunk servers but still only uses a goal of 2. I need to do some maintenance on one of these chunk servers and was thinking of putting it into maintenance mode so that there wouldn't be any unnecessary replication during the hour or two that this chunk server is out of service. But I noticed this in the MooseFS 3.0 manual: "Note: If number of Chunkservers in maintenance mode is equal or greater than 20% of all Chunkserver, MooseFS treats all Chunkservers like maintenance mode wouldn't be enabled at all." If I understand this correctly, I would need at least six active chunk servers before I could take one offline for maintenance. Is that correct? If so, what is the reasoning behind this limitation? Not only is limiting needless replication important to me but even more important is the blocking of I/O operations to that chunk server while it is being shut down. Even when I have only two chunk servers, I would like to be able to enable maintenance mode for this particular benefit as described in the manual: "When you turn maintenance mode on for specific Chunkserver a few seconds before stop, MooseFS will finish write operations and won't start a new ones on this Chunkserver." As always, thanks for your help and for a terrific distributed filesystem! Steve |
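Steve's count follows from the manual's "equal or greater than 20%" wording: one chunkserver out of five is exactly 20% and still trips the rule, so six is the smallest cluster where a single server can go into maintenance mode. A throwaway check:

    # Smallest cluster size at which 1 chunkserver stays under the 20% threshold
    for n in 2 3 4 5 6 7 8; do
        awk -v n="$n" 'BEGIN { p = 100 / n;
            printf "%d chunkservers: 1 in maintenance = %.1f%% -> %s\n",
                   n, p, (p >= 20) ? "maintenance mode ignored" : "maintenance mode honoured" }'
    done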
From: David M. <dav...@pr...> - 2019-02-19 01:28:32
|
Hi Piotr, Thank you for the prompt response - here are some details on the disk: Via $ smartctl -a /dev/sdb1: Raw_Read_Error_Rate: 0 Throughput_Performance: 150 Spin_Up_Time: 591 (Average 665) Start_Stop_Count: 40 Reallocated_Sector_Ct: 0 Seek_Error_Rate: 0 Seek_Time_Performance: 33 Power_On_Hours: 32285 Spin_Retry_Count: 0 Power_Cycle_Count: 40 Power-Off_Retract_Count: 695 Load_Cycle_Count: 695 Temperature_Celsius: 38 (Min/Max 21/46) Reallocated_Event_Count: 0 Current_Pending_Sector: 0 Offline_Uncorrectable: 0 UDMA_CRC_Error_Count: 0 Via $ cat /proc/mounts: /dev/sdb1 /disk3 ext4 rw,seclabel,relatime,data=ordered 0 0 Would you recommend reformatting as xfs rather than ext4? Thanks, Dave ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Monday, February 18, 2019 4:55 PM, Piotr Robert Konopelko <pio...@ge...> wrote: > Hi Dave, > Thank you for your e-mail. > > Are there any "reallocated sectors" of this HDD? If no, it should be fine to reformat it (just in case) and add to Chunkserver configuration then (as empty). If the issue will persist (i.e. MooseFS will mark it as damaged soon again), you should look for H/W issue (HDD itself / controller / SATA cable etc.). > > What file system are you using? Does it have compression enabled? If compression is enabled, MooseFS may incorrectly consider such a disk as damaged, as number of blocks can change on compressed file systems. Then for such a compressed underlying filesystem you should consider using "~" before path to mounted partition (see more in man mfshdd.cfg and comments directly in this file). > > Hope it helps. > > Best regards, > Piotr > > Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... > Business & Technical Support Manager > MooseFS Client Support Team > > [WWW](http://moosefs.com/) | [GitHub](https://github.com/moosefs/moosefs) | [Twitter](https://twitter.com/moosefs) | [Facebook](https://www.facebook.com/moosefs) | [LinkedIn](https://www.linkedin.com/company/moosefs) > >> On 19 Feb 2019, at 1:22 AM, David Myer via moosefs-users <moo...@li...> wrote: >> >> Dear MooseFS Users, >> >> I had a 1TB disk in my cluster that had the "damaged" status. I followed the removal steps and took it out of the cluster. I then performed a SMART long test which passed. >> >> Should I continue with replacement of the disk, or should I just reformat it and re-add it to the cluster? If there's nothing wrong with the disk I would rather avoid the time and potential risks of hot swapping it from a live chunkserver. >> >> Any advice would be appreciated. >> >> Thanks, >> Dave |
From: Piotr R. K. <pio...@ge...> - 2019-02-19 00:56:07
|
Hi Dave, Thank you for your e-mail. Are there any "reallocated sectors" of this HDD? If no, it should be fine to reformat it (just in case) and add to Chunkserver configuration then (as empty). If the issue will persist (i.e. MooseFS will mark it as damaged soon again), you should look for H/W issue (HDD itself / controller / SATA cable etc.). What file system are you using? Does it have compression enabled? If compression is enabled, MooseFS may incorrectly consider such a disk as damaged, as number of blocks can change on compressed file systems. Then for such a compressed underlying filesystem you should consider using "~" before path to mounted partition (see more in man mfshdd.cfg and comments directly in this file). Hope it helps. Best regards, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 19 Feb 2019, at 1:22 AM, David Myer via moosefs-users <moo...@li...> wrote: > > Dear MooseFS Users, > > I had a 1TB disk in my cluster that had the "damaged" status. I followed the removal steps and took it out of the cluster. I then performed a SMART long test which passed. > > Should I continue with replacement of the disk, or should I just reformat it and re-add it to the cluster? If there's nothing wrong with the disk I would rather avoid the time and potential risks of hot swapping it from a live chunkserver. > > Any advice would be appreciated. > > Thanks, > Dave |
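For reference, a sketch of what that looks like in the chunkserver's mfshdd.cfg (paths are examples; see man mfshdd.cfg for the exact semantics of the prefixes):

    # /etc/mfs/mfshdd.cfg -- example entries
    /disk1
    /disk2
    # '~' before the path tells MooseFS not to treat changes in the total
    # block count as damage, which is the setting suggested above for a
    # compressed underlying filesystem:
    ~/disk3
    # '*' would instead mark a disk for removal (its chunks get replicated away):
    #*/disk4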
From: David M. <dav...@pr...> - 2019-02-19 00:23:08
|
Dear MooseFS Users, I had a 1TB disk in my cluster that had the "damaged" status. I followed the removal steps and took it out of the cluster. I then performed a SMART long test which passed. Should I continue with replacement of the disk, or should I just reformat it and re-add it to the cluster? If there's nothing wrong with the disk I would rather avoid the time and potential risks of hot swapping it from a live chunkserver. Any advice would be appreciated. Thanks, Dave |
From: Piotr R. K. <pio...@ge...> - 2019-02-18 10:13:38
|
Hi Tom, it has been fixed in this commit: https://github.com/moosefs/moosefs/commit/02a58bf61e93fe82afe22f374e62912d89a66f13 <https://github.com/moosefs/moosefs/commit/02a58bf61e93fe82afe22f374e62912d89a66f13> Best regards, Piotr Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> Business & Technical Support Manager MooseFS Client Support Team WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > On 5 Feb 2019, at 2:12 PM, Piotr Robert Konopelko <pio...@ge...> wrote: > > Hi Tom, > We have changed humanize_number() name in MooseFS sources in order not to collide with NetBSD :) > > Best, > Piotr > > Piotr Robert Konopelko | m: +48 601 476 440 | e: pio...@mo... <mailto:pio...@mo...> > Business & Technical Support Manager > MooseFS Client Support Team > > WWW <http://moosefs.com/> | GitHub <https://github.com/moosefs/moosefs> | Twitter <https://twitter.com/moosefs> | Facebook <https://www.facebook.com/moosefs> | LinkedIn <https://www.linkedin.com/company/moosefs> > >> On 20 Jan 2019, at 6:59 PM, Tom Ivar Helbekkmo via moosefs-users <moo...@li... <mailto:moo...@li...>> wrote: >> >> I run MooseFS on NetBSD, and while the new mfsmetadirinfo is a nice >> addition, it doesn't compile cleanly for me out of the box. The reason >> is that NetBSD, like the other BSDs, has a humanize_number() already, so >> when you create your own function by that name, there's a collision. >> >> You haven't observed this on FreeBSD, because they want you to link with >> libutil, and #include <libutil.h> in your source code, to get access to >> this function. In NetBSD, it's in the standard C library, and stdlib.h. >> >> Our humanize_number() originated in NetBSD, but has been adopted by the >> other BSDs, and is also available on Linux, where you link with libbsd, >> and #include <bsd/stdlib.h>, to get access to it. It works exactly the >> same way on all these systems. >> >> I'm appending the change I've made locally - if you'd like to do >> something similar in the official distribution, to avoid maintaining >> your own humanize_number() in the future, you might apply this, or >> something like it, with the addition of #ifdef bits to add the right >> #include directives for your supported operating systems. >> >> Alternatively, if you'd rather keep your own implementation, consider >> this a polite request to change its name so it doesn't clash with the >> existing humanize_number() function... ;) >> >> My local modification: >> >> diff --git a/mfsmetatools/mfsmetadirinfo.c b/mfsmetatools/mfsmetadirinfo.c >> index 66cffe5..672e186 100644 >> --- a/mfsmetatools/mfsmetadirinfo.c >> +++ b/mfsmetatools/mfsmetadirinfo.c >> @@ -14,68 +14,22 @@ >> //static uint8_t humode=0; >> //static uint8_t numbermode=0; >> >> +// For the humanized representation of numbers, decide how many >> +// digits will be used before moving to the next scale factor. >> +// The value 4 here means you'll get up to "9999 MiB" before it >> +// changes to "10 GiB", while 3 jumps from "999 MiB" to "1 GiB". >> +// The magic '3' in the calculation below is for a space, a >> +// scaling letter, and a terminating 0 byte for the result. 
>> +#define HUMAN_DIGITS 4 >> +#define HUMAN_SUFFIX "iB" >> +#define HUMAN_LENGTH (HUMAN_DIGITS+3+strlen(HUMAN_SUFFIX)) >> + >> enum { >> STATUS_OK = 0, >> STATUS_ENOENT = 1, >> STATUS_ANY = 2 >> }; >> >> -#define PHN_USESI 0x01 >> -#define PHN_USEIEC 0x00 >> -char* humanize_number(uint64_t number,uint8_t flags) { >> - static char numbuf[6]; // [ "xxx" , "xx" , "x" , "x.x" ] + ["" , "X" , "Xi"] >> - uint64_t divisor; >> - uint16_t b; >> - uint8_t i; >> - uint8_t scale; >> - >> - if (flags & PHN_USESI) { >> - divisor = 1000; >> - } else { >> - divisor = 1024; >> - } >> - if (number>(UINT64_MAX/100)) { >> - number /= divisor; >> - number *= 100; >> - scale = 1; >> - } else { >> - number *= 100; >> - scale = 0; >> - } >> - while (number>=99950) { >> - number /= divisor; >> - scale+=1; >> - } >> - i=0; >> - if (number<995 && scale>0) { >> - b = ((uint32_t)number + 5) / 10; >> - numbuf[i++]=(b/10)+'0'; >> - numbuf[i++]='.'; >> - numbuf[i++]=(b%10)+'0'; >> - } else { >> - b = ((uint32_t)number + 50) / 100; >> - if (b>=100) { >> - numbuf[i++]=(b/100)+'0'; >> - b%=100; >> - } >> - if (b>=10 || i>0) { >> - numbuf[i++]=(b/10)+'0'; >> - b%=10; >> - } >> - numbuf[i++]=b+'0'; >> - } >> - if (scale>0) { >> - if (flags&PHN_USESI) { >> - numbuf[i++]="-kMGTPE"[scale]; >> - } else { >> - numbuf[i++]="-KMGTPE"[scale]; >> - numbuf[i++]='i'; >> - } >> - } >> - numbuf[i++]='\0'; >> - return numbuf; >> -} >> - >> typedef struct _metasection { >> off_t offset; >> uint64_t length; >> @@ -633,6 +587,7 @@ int calc_dirinfos(FILE *fd) { >> >> void print_result_plain(FILE *ofd) { >> dirinfostate *dis; >> + char numbuf[16]; >> fprintf(ofd,"------------------------------\n"); >> for (dis = dishead ; dis!=NULL ; dis=dis->next) { >> fprintf(ofd,"path: %s\n",dis->path); >> @@ -643,11 +598,16 @@ void print_result_plain(FILE *ofd) { >> fprintf(ofd,"chunks: %"PRIu64"\n",liset_card(dis->chunk_liset)); >> fprintf(ofd," keep chunks: %"PRIu64"\n",dis->s.kchunks); >> fprintf(ofd," arch chunks: %"PRIu64"\n",dis->s.achunks); >> - fprintf(ofd,"length: %"PRIu64" = %5sB\n",dis->s.length,humanize_number(dis->s.length,PHN_USEIEC)); >> - fprintf(ofd,"size: %"PRIu64" = %5sB\n",dis->s.size,humanize_number(dis->s.size,PHN_USEIEC)); >> - fprintf(ofd,"keep size: %"PRIu64" = %5sB\n",dis->s.keeprsize,humanize_number(dis->s.keeprsize,PHN_USEIEC)); >> - fprintf(ofd,"arch size: %"PRIu64" = %5sB\n",dis->s.archrsize,humanize_number(dis->s.archrsize,PHN_USEIEC)); >> - fprintf(ofd,"real size: %"PRIu64" = %5sB\n",dis->s.rsize,humanize_number(dis->s.rsize,PHN_USEIEC)); >> + humanize_number(numbuf,HUMAN_LENGTH,dis->s.length,HUMAN_SUFFIX,HN_AUTOSCALE,0); >> + fprintf(ofd,"length: %"PRIu64" = %s\n",dis->s.length,numbuf); >> + humanize_number(numbuf,HUMAN_LENGTH,dis->s.size,HUMAN_SUFFIX,HN_AUTOSCALE,0); >> + fprintf(ofd,"size: %"PRIu64" = %s\n",dis->s.size,numbuf); >> + humanize_number(numbuf,HUMAN_LENGTH,dis->s.keeprsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); >> + fprintf(ofd,"keep size: %"PRIu64" = %s\n",dis->s.keeprsize,numbuf); >> + humanize_number(numbuf,HUMAN_LENGTH,dis->s.archrsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); >> + fprintf(ofd,"arch size: %"PRIu64" = %s\n",dis->s.archrsize,numbuf); >> + humanize_number(numbuf,HUMAN_LENGTH,dis->s.rsize,HUMAN_SUFFIX,HN_AUTOSCALE,0); >> + fprintf(ofd,"real size: %"PRIu64" = %s\n",dis->s.rsize,numbuf); >> } else { >> fprintf(ofd,"path not found !!!\n"); >> } >> >> -tih >> -- >> Most people who graduate with CS degrees don't understand the significance >> of Lisp. Lisp is the most important idea in computer science. 
--Alan Kay >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > |