From: Davies L. <dav...@gm...> - 2018-05-10 14:50:51

max(N/CHUNKS_LOOP_MAX_CPS, CHUNKS_LOOP_MIN_TIME)

N = 4.000.000

On Thu, May 10, 2018 at 5:32 AM, Gandalf Corvotempesta <gan...@gm...> wrote:
> Can someone explain to me, in detail, how CHUNKS_LOOP_MAX_CPS and
> CHUNKS_LOOP_MIN_TIME work?
>
> If I have 4.000.000 chunks, how much time do I need to loop them all?

--
- Davies

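As a worked example of Davies' formula, here is a minimal shell sketch. The values CHUNKS_LOOP_MAX_CPS=100000 and CHUNKS_LOOP_MIN_TIME=300 are assumed defaults, not values taken from this thread; check your own mfsmaster.cfg:

    N=4000000                        # total number of chunks
    CHUNKS_LOOP_MAX_CPS=100000       # assumed default: max chunks scanned per second
    CHUNKS_LOOP_MIN_TIME=300         # assumed default: minimum duration of one loop, in seconds
    loop_time=$(( N / CHUNKS_LOOP_MAX_CPS ))      # 4000000 / 100000 = 40 seconds
    if [ "$loop_time" -lt "$CHUNKS_LOOP_MIN_TIME" ]; then
        loop_time=$CHUNKS_LOOP_MIN_TIME           # the minimum wins here: 300 seconds
    fi
    echo "one full chunk loop takes about ${loop_time} seconds"

So with 4 million chunks and those assumed defaults, the loop is bounded by CHUNKS_LOOP_MIN_TIME and takes roughly 5 minutes.
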
From: Gandalf C. <gan...@gm...> - 2018-05-09 21:32:45

Can someone explain to me, in detail, how CHUNKS_LOOP_MAX_CPS and CHUNKS_LOOP_MIN_TIME work?

If I have 4.000.000 chunks, how much time do I need to loop them all?

From: Wilson, S. M <st...@pu...> - 2018-05-07 14:36:55

Hi,

I'm considering implementing a dedicated network for our chunk servers to use solely for replication among themselves. By doing this, I hope to separate the chunk traffic from the clients from the replication traffic that takes place among the chunk servers. If my understanding is correct, this is not achieved by using the REMAP_* options in mfsmaster.cfg, which only separate out the traffic to/from the master.

If anyone else has done this, I'd be grateful to hear about your experience, especially in these two areas:
1) what level of performance improvement was seen
2) what needed to be done in the MooseFS configuration and OS networking to implement it

Thanks!

Steve

From: Alexander A. <ba...@ya...> - 2018-04-19 08:18:00

On 19.04.2018 10:48, Alexander AKHOBADZE wrote:
> Hi Guys!
>
> No suggestions? It's a pity :--(
>
> I have an appreciable slowdown on the clients' side and one of my chunk
> servers periodically goes offline (I see it in the web CGI; this chunk
> server now receives massive incoming replication).
>
> So in addition to my previous message, I see in the master's log many
> messages like this:
>
> mfsmaster[77845]: (10.73.1.29:9422 -> 10.73.1.30:9422) chunk:
> 00000000000A25AF replication status: Disconnected
>
> and in the chunk server's log:
>
> Apr 19 10:20:19 proxmox-2 mfsmount[1839]: registered to master
> Apr 19 10:20:25 proxmox-2 mfschunkserver[17093]: masterconn: connection closed by master
> Apr 19 10:20:25 proxmox-2 mfschunkserver[17093]: replicator: connection lost
>
> Chunk servers are connected to the network via a 10 Gb link, the master
> via 1 Gb. No pings are lost between any servers.
>
> Please help!

From: Alexander A. <ba...@ya...> - 2018-04-19 07:49:07

Hi Guys!

No suggestions? It's a pity :--(

I have an appreciable slowdown on the clients' side and one of my chunk servers periodically goes offline (I see it in the web CGI; this chunk server now receives massive incoming replication).

So in addition to my previous message, I see in the master's log many messages like this:

mfsmaster[77845]: (10.73.1.29:9422 -> 10.73.1.30:9422) chunk: 00000000000A25AF replication status: Disconnected

and in the chunk server's log:

Apr 19 10:20:19 proxmox-2 mfsmount[1839]: registered to master
Apr 19 10:20:25 proxmox-2 mfschunkserver[17093]: masterconn: connection closed by master
Apr 19 10:20:25 proxmox-2 mfschunkserver[17093]: replicator: connection lost

Chunk servers are connected to the network via a 10 Gb link, the master via 1 Gb. No pings are lost between any servers.

Please help!

On 17.04.2018 10:24, Alexander AKHOBADZE wrote:
> Hi All!
>
> I have many messages in the master's log like this:
>
> mfsmaster[...]: connection with METALOGGER-SYNC(...) has been closed by peer
> mfsmaster[...]: connection with client(ip:...) has been closed by peer
>
> mfsmaster[...]: chunkserver register begin (packet version: 6) - ip: ... / port: 9422, usedspace: ..., totalspace: ...
> mfsmaster[...]: server ip: ... / port: 9422 has been fully removed from data structures
> mfsmaster[...]: chunkserver register end (packet version: 6) - ip: ... / port: 9422
>
> What does it mean and how can I fix it?
>
> I ask because I see some slowdowns of the whole cluster's operation (I guess there are a lot of reconnections) while all servers are online and working well.
>
> If any additional info is needed, I am ready to provide it. Many thanks!
>
> wbr
>
> Alexander

From: Alexander A. <ba...@ya...> - 2018-04-17 07:30:50

Hi All!

I have many messages in the master's log like this:

mfsmaster[...]: connection with METALOGGER-SYNC(...) has been closed by peer
mfsmaster[...]: connection with client(ip:...) has been closed by peer

mfsmaster[...]: chunkserver register begin (packet version: 6) - ip: ... / port: 9422, usedspace: ..., totalspace: ...
mfsmaster[...]: server ip: ... / port: 9422 has been fully removed from data structures
mfsmaster[...]: chunkserver register end (packet version: 6) - ip: ... / port: 9422

What does it mean and how can I fix it?

I ask because I see some slowdowns of the whole cluster's operation (I guess there are a lot of reconnections) while all servers are online and working well.

If any additional info is needed, I am ready to provide it. Many thanks!

wbr

Alexander

From: Michael T. <mic...@ho...> - 2018-04-10 08:04:43

It just happened again a few hours ago. :( Same VM guest.

--- mike t.

From: Michael T. <mic...@ho...> - 2018-04-03 05:48:18

Hi Peter,

No. I migrated them from 2.x to 3.0.91 sometime March 2017.

--- mike t.

From: Piotr R. K. <pio...@mo...> - 2018-04-03 05:33:13

Hi Mike,

Have you ever had on this instance any of the Chunkservers in the following versions: 3.0.75, 3.0.76, 3.0.77?

Best regards,
Peter

--
Piotr Robert Konopelko | +48 601 476 440
MooseFS Client Support Team | moosefs.com
// Sent from my phone, sorry for condensed form

From: Michael T. <mic...@ho...> - 2018-04-03 02:14:21

Hi,

A strange thing has been happening over these past two weeks. Three times now, one or two missing chunks have appeared with "WRONG VERSION" as the type of missing chunk shown in the GUI. What is strange is that all these missing chunks occurred only for the same file.

Here's my (partial) setup:

1x Master
3x Chunkservers -- all files have goal=3, and 1 chunkserver is doing a rebalancing
1x Client

All are running at version 3.0.100.

The client is a KVM host running small VMs whose disks are stored as .raw/.img files under /mnt/moosefs/images. The affected file is always the raw disk image of a VM (ProxyServer02):

Missing files (gathered by previous file-loop)

#  paths                                        inode     index  chunk id          type of missing chunk
1  ./TRASH (KVM/images/ProxyServer02-hda.raw)   12261261  200    0000000001DD5970  WRONG VERSIONS
2  ./TRASH (KVM/images/ProxyServer02-hda.raw)   12261261  206    0000000001DD5978  WRONG VERSIONS

I've been running this setup for several years now but only now have I encountered this. It is a good thing that the affected VM is not mission critical and we can recover from this issue within an hour of detecting it.

Where should I begin to troubleshoot this?

--- mike t.

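Not an answer from the thread, but a natural first step when chasing chunks like these is to look at them from the client and chunkserver side. A rough sketch: the path is the one from this setup, the chunk IDs are the ones reported above, and the log location varies per distro:

    # list every chunk of the affected image together with the chunkservers holding copies
    mfsfileinfo /mnt/moosefs/images/ProxyServer02-hda.raw

    # search the chunkserver logs for the chunk IDs the GUI flagged as WRONG VERSION
    grep -iE '0000000001DD5970|0000000001DD5978' /var/log/syslog
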
From: Warren M. <wa...@an...> - 2018-03-21 22:07:37

That's what I thought... but wanted to make sure 😊

Warren Myers
https://antipaucity.com

From: Piotr R. K. <pio...@ge...> - 2018-03-21 21:57:55

Hi Warren,

In your case with exactly 12 Chunkservers it would be 120 GB.

In general it would be RAW space (sum of HDDs (minus a little, like 256 MB per 1 Chunkserver)) divided by goal.

Why? Because MooseFS divides files into chunks and distributes them across chunkservers, so size of one Chunkserver is not a limit for MooseFS :)

Best regards,
Peter

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com

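Peter's rule spelled out with Warren's numbers (ignoring the small per-chunkserver overhead he mentions):

    servers=12; per_server_gb=20; goal=2
    raw_gb=$(( servers * per_server_gb ))   # 12 * 20 = 240 GB of raw space
    usable_gb=$(( raw_gb / goal ))          # 240 / 2 = 120 GB usable with goal=2
    echo "~${usable_gb} GB usable, and a single file can grow up to roughly that size"

Because a file is split into 64 MB chunks spread across all chunkservers, a single file is limited by the pool as a whole, not by any one 20 GB chunkserver.
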
From: Warren M. <wa...@an...> - 2018-03-21 21:44:08

If I have, say, a dozen 20GB chunkservers with a replication set to 2 copies, what is the largest file I can save on the MooseFS mount?

Is it 20GB (minus a little)? Or is it 120GB (minus a little)? Or something else entirely?

Thanks,

Warren Myers
https://antipaucity.com

From: Alexander A. <ba...@ya...> - 2018-03-20 06:23:50

Hi!

As for "Support of standard ACLs in addition to Unix file modes": in my case (v3.0.100) getfacl and setfacl work fine. Do you mean this?

wbr
Alexander

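For anyone who wants to repeat that check on their own cluster, a minimal sketch; the mount point /mnt/mfs and the user name alice are placeholders, and getfacl/setfacl come from the standard acl package:

    # grant an extra user read/write/execute on a directory inside the MooseFS mount
    setfacl -m u:alice:rwx /mnt/mfs/shared

    # read the ACL back to confirm MooseFS stored it
    getfacl /mnt/mfs/shared
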
From: Marin B. <li...@ol...> - 2018-03-19 20:16:46

Hi,

I have been using MooseFS CE for 4 or 5 years now and I'm really happy with it. Now I have a few questions about the project roadmap, and the differences between the MooseFS CE and Pro editions.

As far as I know, MooseFS CE and Pro editions ship with exactly the same feature set, except for mfsmaster HA, which is only available with a Pro license. However, the moosefs.com website mentions several features which seem not to be included in the current release of MooseFS CE. Here is a short list:

* Computation on Nodes
* Erasure coding with up to 9 parity sums
* Support of standard ACLs in addition to Unix file modes
* SNMP management interface
* Data compression or data deduplication? (not sure on this; deduced from "MooseFS enables users to save a lot of HDD space maintaining the same data redundancy level.")

I may be wrong but as far as I know, none of those wonderful features are part of the MooseFS 3.x CE branch. Will these features ship with the next major ("MooseFS 4.0") version? More importantly: will those features require Pro licensing?

Could you please clarify those points for me?

Thank you,

Marin.

From: Piotr R. K. <pio...@mo...> - 2018-03-16 17:06:24

Hi Alex,

first field in the output is an i-node number.

Best regards,
Peter

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com

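A side note, in case that i-node number ever needs to be mapped back to a path by hand: a generic approach that also works on a mounted MooseFS (the mount point and the i-node value below are placeholders, and a full scan can be slow on a large tree):

    # find the file whose i-node matches the number printed in the first field
    find /mnt/mfs -inum 12261261 2>/dev/null
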
From: Alexander A. <ba...@ya...> - 2018-03-16 08:24:03

Thanks a lot! It works!

And what does the first field with digits in the output mean?

On 15.03.2018 21:38, Piotr Robert Konopelko wrote:
> Here you are! :)
>
> It can be improved, because there is a lot of unnecessary code just taken from MFSCGI / MFSCLI, but it is not very important. Running it without parameters will show you help on how to use it.
>
> Best,
> Peter

From: Alexander A. <ba...@ya...> - 2018-03-15 17:52:57

WOW!!! Great!!! Yes, sure!

From: Piotr R. K. <pio...@mo...> - 2018-03-15 16:06:46

Hi,

>>> In my case I have to put about 150 million files/directories on Moose
>>> and I'm afraid rsync-ing will last for eternity...

we have a script written which allows you to get recently modified files within a configurable period of time. It scans changelogs and queries the Master Server for these files' paths (so it is best to run it either on the Master or on a Metalogger).

I can share it with you if you would be interested.

Best regards,
Peter

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com

From: Alexander A. <ba...@ya...> - 2018-03-15 14:27:51

And by the way, ZFS also provides deduplication... I believe it will dramatically help to reduce space requirements for backup storage.

On the other hand, turning deduplication on really slows down file operations on ZFS :--(

> On 15.03.2018 16:38, Wilson, Steven M wrote:
>> Alexander,
>>
>> You're right; it can take a long time with a lot of files. Our largest MooseFS file system, in terms of number of files, has 100 million files. In this case, I use rsync with GNU parallel so that I'm running several rsync jobs simultaneously. Here's the command from my backup script that runs the rsync from parallel:
>>
>> ls $src | parallel --jobs -1 'echo DIR: {}; rsync $args $src/{} $dst' 2>&1
>>
>> Unless there's been a lot of activity on this file system, I can usually get it backed up in about 10 hours.
>>
>> Steve
>>
>> ________________________________________
>> From: Alexander AKHOBADZE <ba...@ya...>
>> Sent: Thursday, March 15, 2018 9:23 AM
>> To: Wilson, Steven M
>> Subject: Re: [MooseFS-Users] Backing up MooseFS
>>
>> Hi Steve! Thank you for the response!
>>
>> This solution attracts me also.
>>
>> And how many files/directories are you rsync-ing?
>> And how much time does it take?
>>
>> In my case I have to put about 150 million files/directories on Moose
>> and I'm afraid rsync-ing will last for eternity...
>>
>> On 15.03.2018 16:09, Wilson, Steven M wrote:
>>> Hi Alexander,
>>>
>>> We copy data from our MooseFS file systems using rsync to backup servers that have ZFS file systems. ZFS provides snapshots and compression. Before each daily backup of a given MooseFS file system, we make a snapshot of the backup (naming it with the date of the snapshot) and then start the backup. For our purposes, 45 days of backups are sufficient, so we only keep the last 45 snapshots, deleting any older snapshots to save space. This has worked out quite well for us.
>>>
>>> Regards,
>>> Steve

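Condensing the approach Steve describes above into one script-shaped sketch; the dataset name tank/mfs-backup, the paths and the rsync options are placeholders, not his actual values:

    src=/mnt/mfs                  # MooseFS mount to back up
    dst=/backup/mfs               # directory backed by the ZFS dataset tank/mfs-backup
    args="-a --delete"            # assumed rsync options

    # keep a dated snapshot of the previous backup state before overwriting it
    zfs snapshot tank/mfs-backup@$(date +%Y-%m-%d)

    # one rsync job per top-level directory, run in parallel (Steve's command,
    # with the variables expanded by the outer shell)
    ls "$src" | parallel --jobs -1 "echo DIR: {}; rsync $args $src/{} $dst" 2>&1

    # keep only the newest 45 snapshots, as in Steve's retention policy
    zfs list -t snapshot -o name -s creation | grep '^tank/mfs-backup@' | head -n -45 | xargs -r -n 1 zfs destroy
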
From: Alexander A. <ba...@ya...> - 2018-03-14 07:13:19

Hi dear All!

Could you share your experience of how you do backups of content stored on MooseFS?

What software do you use?

What advantages and disadvantages do you have in your case?

Thanks a lot!

wbr
Alexander

From: Michael T. <mic...@ho...> - 2018-02-19 21:02:42

I can confirm that it did work. 😊

--- mike t.

From: Piotr R. K. <pio...@mo...> - 2018-02-18 21:07:28

As Davies said it will work. I confirm :)

Best regards,
Peter

--
Piotr Robert Konopelko | mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com

From: Michael T. <mic...@ho...> - 2018-02-16 08:25:03

Cool! Thanks.

From: Davies L. <dav...@gm...> - 2018-02-16 03:24:44

Yes, it will work.

On Wed, Feb 14, 2018 at 10:04 PM, Michael Tinsay <mic...@ho...> wrote:
> It's me again with a different question.
>
> Through the years one of my chunkservers has gone through several disk
> replacements, upgrades, and additions. Disks are mounted in /mnt and named
> mfshdd<seq> with <seq> being a through z, i.e. mfshdda, mfshddb, and so on.
> It started out with 3 disks, mfshdd[a-c], and now it is up to the letter i.
> mfshdd.cfg now looks like:
>
> /mnt/mfshdda
> #/mnt/mfshddb
> /mnt/mfshddc
> /mnt/mfshddd
> #/mnt/mfshdde
> /mnt/mfshddf
> #mnt/mfshddg
> #mnt/mfshddh
> /mnt/mfshddi
>
> I'm toying with the idea of changing a couple of the mountpoints so that I
> have no holes in the sequence. The steps I'm thinking of are:
>
> (a) stop moosefs-chunkserver
> (b) remount mfshddi as mfshddb, and mfshddf as mfshdde
> (c) edit mfshdd.cfg to reflect this change
> (e) start moosefs-chunkserver
>
> Is this doable without losing the chunks that are expected to be in
> mfshddf and mfshddi? They're still there anyway, just in a different
> mountpoint. Will they be recognized as valid chunks when the chunkserver is
> started back up and it scans the folders?
>
> --- mike t.

--
- Davies

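For completeness, the confirmed steps written out as commands; the service name, config path and device names below are assumptions that depend on the distribution and packaging:

    # (a) stop the chunkserver so no chunk files are open while the paths change
    systemctl stop moosefs-chunkserver

    # (b) remount the two disks under their new names (devices are placeholders)
    mkdir -p /mnt/mfshddb /mnt/mfshdde
    umount /mnt/mfshddi && mount /dev/sdX1 /mnt/mfshddb
    umount /mnt/mfshddf && mount /dev/sdY1 /mnt/mfshdde
    # remember to update /etc/fstab so the rename survives a reboot

    # (c) point mfshdd.cfg at the new mountpoints (config path assumed)
    sed -i -e 's|^/mnt/mfshddi$|/mnt/mfshddb|' -e 's|^/mnt/mfshddf$|/mnt/mfshdde|' /etc/mfs/mfshdd.cfg

    # (e) start the chunkserver again; it rescans the folders and re-registers the chunks
    systemctl start moosefs-chunkserver
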