From: Wilson, S. M <st...@pu...> - 2016-11-02 19:31:40
Hi,

We have several workstations that are using the latest version of mfsmount (3.0.84) and I've started to receive complaints about very slow performance. I ran a few tests (untarring the Linux kernel source) and it appears that on the 3.0.84 clients performance will continue to degrade each time I run the test. For example, one workstation shows these results from three successive runs:

root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
real 4m34.614s
user 0m1.416s
sys 0m7.480s
real 2m57.863s
user 0m0.436s
sys 0m2.192s

root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
real 6m59.159s
user 0m1.924s
sys 0m7.276s
real 5m39.582s
user 0m0.484s
sys 0m2.548s

root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
real 9m1.816s
user 0m1.928s
sys 0m7.160s
real 7m54.979s
user 0m0.584s
sys 0m2.968s

If I unmount the file system and then mount it again, performance returns to normal but will degrade over time like before. On the other hand, if I run the same test on a different client using mfsmount 3.0.81, then my performance remains stable and doesn't degrade over time after heavy use. Is there perhaps a problem with mfsmount versions higher than 3.0.81? I should add that none of my servers (master, metalogger, chunk) are running higher than 3.0.81, so this could be due to a mismatch between mfsmount and server version. I doubt this but I wanted to mention it.

Just to mention in passing, the same test on a local disk is blazingly fast in comparison. I understand that this is a really tortuous test for a distributed file system, but the performance discrepancy is quite substantial (5 seconds vs. 275 seconds for the untar, 1 second vs. 178 seconds for the rm). Here are the timings on the local disk:

stevew@otter:/otter-scratch/TEST$ time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
real 0m5.419s
user 0m0.304s
sys 0m2.368s
real 0m1.038s
user 0m0.052s
sys 0m0.976s

Thanks,
Steve
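A repeatable version of the test above, as a small shell loop (a sketch only; it assumes the tarball already sits in a scratch directory on the MooseFS mount and that /usr/bin/time is available):

    cd /net/em/test
    for i in 1 2 3; do
        echo "run $i"
        /usr/bin/time -p tar xf linux-4.9-rc3.tar
        /usr/bin/time -p rm -rf linux-4.9-rc3
    done

Comparing the "real" times across runs on a freshly mounted client versus a long-mounted one is enough to show the kind of degradation described here.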
From: Marin B. <li...@ol...> - 2016-10-20 19:19:33
Hi,

Thank you for taking the time to deal with this.

I have tried to mount MFS with an explicit -o mfscachemode=YES on a FreeBSD 10.3-RELEASE server with MooseFS 3.0.84. It works fine. It also seems to work with mfscachemode=AUTO. Do you recommend using it, or should we stick to the YES mode?

I also tried to mount MFS with the same options on a FreeBSD 11.0-RELEASE-p1 box, with MooseFS 3.0.81. This time, specifying the cache mode did not fix the issue: append mode is still overwriting the destination file. I do not know if this happens because I'm using an older version of the client (3.0.81 vs 3.0.84), or if this is related to changes in FreeBSD 11.0 regarding the FUSE implementation.

Since we're dealing with cache, I noticed another strange thing, both on FreeBSD 10.3 and 11.0. If my understanding is correct, data caching is disabled by default by mfsmount when it's running on FreeBSD. Yet, it seems caching is still operating behind the scenes. Here is what I noticed a few days ago:

- Let's say node A and node B have access to the same MFS file system, and run mfsmount with default options.
- The file system contains a file, e.g. "file.txt".
- If I run "cat file.txt" on both nodes, I see the same contents.
- Now, node A deletes the file.
- Then, node A creates a new file with the same name but different contents.
- On node A: "cat file.txt" displays the contents of the new file.
- On node B: "cat file.txt" displays the contents of the old file.

How can this happen if data caching is disabled? Is it related to FreeBSD's own caching? Do you know of a way to fix this?

Many thanks, again, for your time.

Marin.

On Thu, 20 Oct 2016 15:39:48 +0200 Aleksander Wieliczko <ale...@mo...> wrote:
> Hi,
> Thank you for all this information.
>
> I would like to inform you that during our investigation we have noticed
> that the problem is connected with FUSE/the kernel.
> The problem appears when we disable the cache. In such a situation, when we open
> a file with the O_APPEND flag, the write process starts at offset 0, and the size of
> the file is the size of the last written data.
>
> The MooseFS client by default starts with mfscachemode=DIRECT on the FreeBSD
> operating system, because only then can the FreeBSD kernel use a larger block
> size than 4k for read and write operations.
> We are trying to find the best solution for this issue, but this process
> can take some time.
>
> Right now we strongly advise mounting the MooseFS client with the
> mfscachemode=YES option:
>
> mfsmount -o mfscachemode=YES -H mfsmaster.host /mfs/path
>
> This step will cause a performance decrease, but you will be able to append
> to files properly.
>
> We will inform you when we find a reasonable solution for this case.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com

--
Marin Bernard <li...@ol...>
From: Aleksander W. <ale...@mo...> - 2016-10-20 13:40:01
Hi,

Thank you for all this information.

I would like to inform you that during our investigation we have noticed that the problem is connected with FUSE/the kernel. The problem appears when we disable the cache: in such a situation, when we open a file with the O_APPEND flag, the write process starts at offset 0, and the size of the file ends up being the size of the last written data.

The MooseFS client by default starts with mfscachemode=DIRECT on the FreeBSD operating system, because only then can the FreeBSD kernel use a block size larger than 4k for read and write operations. We are trying to find the best solution for this issue, but this process can take some time.

Right now we strongly advise mounting the MooseFS client with the mfscachemode=YES option:

mfsmount -o mfscachemode=YES -H mfsmaster.host /mfs/path

This step will cause a performance decrease, but you will be able to append to files properly.

We will inform you when we find a reasonable solution for this case.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
From: Marin B. <li...@ol...> - 2016-10-15 16:25:13
Thank you!

> > I just tried again with 3.0.84 on FreeBSD 10.3. Same issue:
>
> We'll try to check and reproduce it this week
>
> > I did not test with FreeBSD 11.0, as it is not yet supported and there is no official repository available for that very recent release. What's more, 3.0.84 hasn't reached the FreeBSD ports tree yet.
>
> It is on its way :)
>
> http://portsmon.freebsd.org/portoverview.py?category=&portname=moosefs3&wildcard=yes
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213395#c3
> https://svnweb.freebsd.org/ports?view=revision&revision=423890
>
> Best regards,
> Peter
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer
> e-mail: pio...@mo...
> www: https://moosefs.com
>
> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.
>
> > On 15 Oct 2016, at 6:09 PM, Marin Bernard wrote:
> >
> > Hi Alex,
> >
> > I just tried again with 3.0.84 on FreeBSD 10.3. Same issue:
> >
> > $ uname -a
> > FreeBSD hostname 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug 11 18:38:15 UTC 2016 ro...@am...:/usr/obj/usr/src/sys/GENERIC amd64
> >
> > $ pkg info | grep moosefs
> > moosefs3-client-3.0.84 MooseFS client tools
> >
> > I did not test with FreeBSD 11.0, as it is not yet supported and there is no official repository available for that very recent release. What's more, 3.0.84 hasn't reached the FreeBSD ports tree yet.
> >
> > Thanks!
> >
> > Marin.
> >
> > On 15 October 2016 at 17:52, Aleksander Wieliczko wrote:
> > Hi,
> > Please update your client to 3.0.84 version and test it again.
> >
> > Alex
> >
> > ------------------------------------------------------------------------------
> > Check out the vibrant tech community on one of the world's most engaging tech sites, SlashDot.org! http://sdm.link/slashdot
> > _________________________________________
> > moosefs-users mailing list
> > moo...@li...
> > https://lists.sourceforge.net/lists/listinfo/moose...
From: Piotr R. K. <pio...@mo...> - 2016-10-15 16:19:54
> I just tried again with 3.0.84 on FreeBSD 10.3. Same issue:

We'll try to check and reproduce it this week

> I did not test with FreeBSD 11.0, as it is not yet supported and there is no official repository available for that very recent release. What's more, 3.0.84 hasn't reached the FreeBSD ports tree yet.

It is on its way :)

http://portsmon.freebsd.org/portoverview.py?category=&portname=moosefs3&wildcard=yes
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213395#c3
https://svnweb.freebsd.org/ports?view=revision&revision=423890

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail: pio...@mo...
www: https://moosefs.com

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.

> On 15 Oct 2016, at 6:09 PM, Marin Bernard <li...@ol...> wrote:
>
> Hi Alex,
>
> I just tried again with 3.0.84 on FreeBSD 10.3. Same issue:
>
> $ uname -a
> FreeBSD hostname 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug 11 18:38:15 UTC 2016 ro...@am...:/usr/obj/usr/src/sys/GENERIC amd64
>
> $ pkg info | grep moosefs
> moosefs3-client-3.0.84 MooseFS client tools
>
> I did not test with FreeBSD 11.0, as it is not yet supported and there is no official repository available for that very recent release. What's more, 3.0.84 hasn't reached the FreeBSD ports tree yet.
>
> Thanks!
>
> Marin.
>
> On 15 October 2016 at 17:52, Aleksander Wieliczko wrote:
> Hi,
> Please update your client to 3.0.84 version and test it again.
>
> Alex
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most engaging tech sites, SlashDot.org! http://sdm.link/slashdot
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
From: Marin B. <li...@ol...> - 2016-10-15 16:10:22
Hi Alex,

I just tried again with 3.0.84 on FreeBSD 10.3. Same issue:

$ uname -a
FreeBSD hostname 10.3-RELEASE-p7 FreeBSD 10.3-RELEASE-p7 #0: Thu Aug 11 18:38:15 UTC 2016 ro...@am...:/usr/obj/usr/src/sys/GENERIC amd64

$ pkg info | grep moosefs
moosefs3-client-3.0.84 MooseFS client tools

I did not test with FreeBSD 11.0, as it is not yet supported and there is no official repository available for that very recent release. What's more, 3.0.84 hasn't reached the FreeBSD ports tree yet.

Thanks!

Marin.

On 15 October 2016 at 17:52, Aleksander Wieliczko wrote:
> Hi,
> Please update your client to 3.0.84 version and test it again.
>
> Alex
From: Aleksander W. <ale...@mo...> - 2016-10-15 15:53:10
Hi,
Please update your client to 3.0.84 version and test it again.

Alex
From: Marin B. <li...@ol...> - 2016-10-15 15:32:04
Hi,

I've just installed the MooseFS client (3.0.81) on a brand new FreeBSD 11.0 workstation. It runs fine except with the append operator (>>), which is not working as expected. More precisely, it produces the same effect as the output redirection operator (>) and overwrites the whole file instead of appending data to it.

$ uname -a
FreeBSD nb-00 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 ro...@re...:/usr/obj/usr/src/sys/GENERIC amd64

Basic appending on a local volume (ZFS):

$ echo test1 > testfile.txt
$ cat testfile.txt
test1
$ echo test2 >> testfile.txt
$ cat testfile.txt
test1
test2

Now the same ops on MFS:

$ echo test1 > /mfs/testfile.txt
$ cat /mfs/testfile.txt
test1
$ echo test2 >> /mfs/testfile.txt
$ cat /mfs/testfile.txt
test2

I ran the same tests on an up-to-date FreeBSD 10.3 server and got the same results, so this is probably not new, unless it is related to an update in the FUSE implementation. I thought it might be a caching issue, but it seems it's not: the file still has the same contents even after I unmounted/remounted MFS.

I find this very alarming as it seems to be related to the way the MooseFS client is dealing with POSIX primitives. Should I worry? Does anybody have an idea about why this is happening?

Thanks!

Marin.
From: Wilson, S. M <st...@pu...> - 2016-10-10 14:48:45
________________________________________
From: F. O. Ozbek <oz...@gm...>
Sent: Friday, October 7, 2016 6:01 PM
To: moo...@li...
Subject: Re: [MooseFS-Users] mfsmaster memory usage

On 10/06/2016 04:12 PM, Wilson, Steven M wrote:
> That's not any worry unless the physical memory is being kept from being
> used for something else that needs it. If the mfsmaster is repeatedly
> touching memory that it's not really using, thus keeping it from being
> paged out, while another application needs more, that would be silly.
> If it's just leaving it sitting there until it can use it again, there's
> no worry.
>
> So in your situation, the natural thing to do would be to make sure that
> the mfsmaster has enough physical memory to handle the peak need, and
> not make it run any other application that competes with it for the RAM.
>
> -tih

I agree, it will be best to have a dedicated mfsmaster that doesn't do much of anything else anyways, so unless there is a real memory leak, there is no problem here.

--
F. O. Ozbek

------------------------------------------------------------------------------

Thanks to you and Tom for your input! For the record, I do agree that a dedicated mfsmaster is the best approach. Unfortunately, we can't afford to put up a dedicated system for each of our MooseFS installations.

Steve
From: Piotr R. K. <pio...@mo...> - 2016-10-10 10:49:36
For MooseFS-Users information: all of YuMing Zhao's problems have been resolved by upgrading his MooseFS instance to MooseFS 3.0.

Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail: pio...@mo...
www: https://moosefs.com

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.

> On 21 Sep 2016, at 2:02 PM, 赵宇明 <yum...@le...> wrote:
>
> Hi Aleksander,
> Thanks for your quick response.
> You are right, under heavy system loads, the master server crashed.
> Please excuse us for not being able to provide much more detailed information; we are going to follow this issue and report back to you.
> We also found some commands, such as "mfsmetarestore -a", "mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog_ml.*.mfs" and "mfsfilerepair". We will try the steps that you suggested and these commands. At the same time, we also look for your help in this.
> Thanks a lot.
>
> Best regards
> YuMing Zhao
>
> From: Aleksander Wieliczko
> Date: 2016-09-21 18:55
> To: 赵宇明; moosefs-users
> CC: 张伟军; 李学宝; 马军伟; 常战青
> Subject: Re: [MooseFS-Users] Asking for Help, MooseFS doesn't work.
>
> Hi.
> Great to hear that you are using our product.
>
> In fact MooseFS 1.6.25 is very, very old software.
> MooseFS 1.6 has a lot of bugs and it's not supported any more.
> Please consider updating to the 3.0.81 version.
>
> It looks like your master server crashed and you lost your metadata.mfs file (what had happened?).
> Then you copied metadata from the metalogger.
>
> First of all, please add some changelogs from the metalogger to the master server and try to start the master.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
>
> ------------------------------------------------------------------------------
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
From: F. O. O. <oz...@gm...> - 2016-10-07 22:01:24
On 10/06/2016 04:12 PM, Wilson, Steven M wrote:
> That's not any worry unless the physical memory is being kept from being
> used for something else that needs it. If the mfsmaster is repeatedly
> touching memory that it's not really using, thus keeping it from being
> paged out, while another application needs more, that would be silly.
> If it's just leaving it sitting there until it can use it again, there's
> no worry.
>
> So in your situation, the natural thing to do would be to make sure that
> the mfsmaster has enough physical memory to handle the peak need, and
> not make it run any other application that competes with it for the RAM.
>
> -tih

I agree, it will be best to have a dedicated mfsmaster that doesn't do much of anything else anyways, so unless there is a real memory leak, there is no problem here.

--
F. O. Ozbek
From: Wilson, S. M <st...@pu...> - 2016-10-06 20:12:31
________________________________________
From: Tom Ivar Helbekkmo <ti...@ha...>
Sent: Thursday, October 6, 2016 9:54 AM
To: Wilson, Steven M
Cc: moo...@li...
Subject: Re: mfsmaster memory usage

"Wilson, Steven M" <st...@pu...> writes:

> On the "Info" page of the CGI display the first two lines under
> "Memory usage detailed info" are "used" and "allocated". Looking at
> our server that peaked at a little over 200 million files and is now
> down to about 140 million files, I see 42GiB being used but 68GiB
> allocated.

Ah, so they're reusing data structures instead of freeing them up and reallocating them. An efficiency thing, maybe? In any case, whether it's the application or the malloc arena holding the unused memory doesn't really make a difference to the virtual size of the heap. I'm assuming that the added complexity in the application isn't making it leak memory, of course. :)

> Using top, it looks like resident memory is almost equal to the amount
> of virtual memory for the mfsmaster process indicating that the memory
> is not only allocated but also occupying physical memory.

That's not any worry unless the physical memory is being kept from being used for something else that needs it. If the mfsmaster is repeatedly touching memory that it's not really using, thus keeping it from being paged out, while another application needs more, that would be silly. If it's just leaving it sitting there until it can use it again, there's no worry.

So in your situation, the natural thing to do would be to make sure that the mfsmaster has enough physical memory to handle the peak need, and not make it run any other application that competes with it for the RAM.

-tih

-------------------------------

I may be shooting myself in the foot, but I always use LOCK_MEMORY for my chunk server and master server daemons. I think this will guarantee that any memory allocated to these processes won't be swapped out. Are there good reasons for not using LOCK_MEMORY? Most of my chunk servers and master servers are also "lightly" used as user workstations and I want to make sure that the server daemons aren't swapped out when competing with user applications for memory.

Steve
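For reference, the setting being discussed lives in the daemon configuration files. A minimal sketch (the value simply mirrors what Steve describes and is not a recommendation; a matching option also exists in the chunkserver configuration):

    # /etc/mfs/mfsmaster.cfg  (similarly in /etc/mfs/mfschunkserver.cfg)
    # keep the daemon's memory resident so it cannot be swapped out
    LOCK_MEMORY = 1

The setting takes effect when the daemon is (re)started.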
From: Ricardo J. B. <ric...@do...> - 2016-10-06 15:21:19
On Thursday 06/10/2016, Markus Koeberl wrote:
[ ... ]
> After reading
> https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ today I
> think I should enable systemd-networkd-wait-online.service to be sure that
> it will always work.

+1 to this, I always have to 'systemctl edit ZZZ.service' and add it by hand (not only for moosefs, though).

Cheers!
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
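For anyone wondering what that hand edit looks like, a sketch of the drop-in (the unit name mfschunkserver.service is an assumption; substitute whatever unit actually starts the daemon on your system):

    # systemctl edit mfschunkserver.service
    [Unit]
    Wants=network-online.target
    After=network-online.target

systemd stores this as /etc/systemd/system/mfschunkserver.service.d/override.conf; together with an enabled systemd-networkd-wait-online.service it holds the daemon back until the network is actually up.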
From: Tom I. H. <ti...@ha...> - 2016-10-06 13:55:10
"Wilson, Steven M" <st...@pu...> writes:

> On the "Info" page of the CGI display the first two lines under
> "Memory usage detailed info" are "used" and "allocated". Looking at
> our server that peaked at a little over 200 million files and is now
> down to about 140 million files, I see 42GiB being used but 68GiB
> allocated.

Ah, so they're reusing data structures instead of freeing them up and reallocating them. An efficiency thing, maybe? In any case, whether it's the application or the malloc arena holding the unused memory doesn't really make a difference to the virtual size of the heap. I'm assuming that the added complexity in the application isn't making it leak memory, of course. :)

> Using top, it looks like resident memory is almost equal to the amount
> of virtual memory for the mfsmaster process indicating that the memory
> is not only allocated but also occupying physical memory.

That's not any worry unless the physical memory is being kept from being used for something else that needs it. If the mfsmaster is repeatedly touching memory that it's not really using, thus keeping it from being paged out, while another application needs more, that would be silly. If it's just leaving it sitting there until it can use it again, there's no worry.

So in your situation, the natural thing to do would be to make sure that the mfsmaster has enough physical memory to handle the peak need, and not make it run any other application that competes with it for the RAM.

-tih
--
Elections cannot be allowed to change anything. --Dr. Wolfgang Schäuble
From: Aleksander W. <ale...@mo...> - 2016-10-06 10:08:53
Hi,

Thank you for all this information. Of course we will release MooseFS with native systemd support for Debian 8 and Ubuntu 16.04 as soon as possible. I would like to add that we already have native systemd support on CentOS 7.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 10/06/2016 11:57 AM, Markus Koeberl wrote:
> On Tuesday 20 September 2016 21:41:31 Aleksander Wieliczko wrote:
>> Hi,
>> Please check your /etc/mfs/mfschunkserver.cfg file and the BIND_HOST parameter (if you have one).
>> It looks like your chunkserver has a different IP address in the cfg file than your NIC IP.
>>
>> MooseFS ports are in the range 9419-9425
>
> I had to change the motherboard of this machine some time ago. In such a case I always disabled the static network configuration in /etc/network/interfaces, booted with the new motherboard, updated /etc/udev/rules.d/70-persistent-net.rules so that the new MAC points to interface eth0 again, enabled the network configuration in /etc/network/interfaces and performed a final reboot...
>
> With systemd things work differently now and I messed up, ending with no configuration in /etc/network/interfaces but a working network because of systemd and DHCP. It seems that some processes get started too early by systemd in that case. This also happened to the ganglia-monitor, but it got restarted by a monitoring script from ages ago when it sometimes crashed.
>
> Yesterday I had the chance to reboot the machine again and everything works perfectly with a static network configuration.
>
> After reading https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ today I think I should enable systemd-networkd-wait-online.service to be sure that it will always work.
>
> By the way: is there any plan for a native systemd configuration?
>
> regards
> Markus Köberl
From: Markus K. <mar...@tu...> - 2016-10-06 09:58:04
On Tuesday 20 September 2016 21:41:31 Aleksander Wieliczko wrote:
> Hi,
> Please check your /etc/mfs/mfschunkserver.cfg file and the BIND_HOST parameter (if you have one).
> It looks like your chunkserver has a different IP address in the cfg file than your NIC IP.
>
> MooseFS ports are in the range 9419-9425

I had to change the motherboard of this machine some time ago. In such a case I always disabled the static network configuration in /etc/network/interfaces, booted with the new motherboard, updated /etc/udev/rules.d/70-persistent-net.rules so that the new MAC points to interface eth0 again, enabled the network configuration in /etc/network/interfaces and performed a final reboot...

With systemd things work differently now and I messed up, ending with no configuration in /etc/network/interfaces but a working network because of systemd and DHCP. It seems that some processes get started too early by systemd in that case. This also happened to the ganglia-monitor, but it got restarted by a monitoring script from ages ago when it sometimes crashed.

Yesterday I had the chance to reboot the machine again and everything works perfectly with a static network configuration.

After reading https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ today I think I should enable systemd-networkd-wait-online.service to be sure that it will always work.

By the way: is there any plan for a native systemd configuration?

regards
Markus Köberl
--
Markus Koeberl
Graz University of Technology
Signal Processing and Speech Communication Laboratory
E-mail: mar...@tu...
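Until official units ship, a minimal hand-written unit along these lines is one way to experiment (everything here is an assumption rather than an official MooseFS file: the install path, the forking behaviour, even the unit name):

    # /etc/systemd/system/mfschunkserver.service
    [Unit]
    Description=MooseFS chunkserver
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=forking
    ExecStart=/usr/sbin/mfschunkserver start
    ExecStop=/usr/sbin/mfschunkserver stop
    ExecReload=/usr/sbin/mfschunkserver reload

    [Install]
    WantedBy=multi-user.target

The same pattern applies to mfsmaster and mfsmetalogger.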
From: Wilson, S. M <st...@pu...> - 2016-10-05 13:49:01
> "Wilson, Steven M" <st...@pu...> writes:
>
>> I've noticed that the mfsmaster daemon doesn't seem to relinquish
>> memory once it has allocated it for metadata.
>
> I would be extremely surprised if that's the case. There may be memory
> leaks in MooseFS -- programmers aren't infallible -- but something as
> blatant as that I find hard to believe.

I certainly don't think it's a blatant oversight on the part of the developers but rather a design decision. On the "Info" page of the CGI display the first two lines under "Memory usage detailed info" are "used" and "allocated". Looking at our server that peaked at a little over 200 million files and is now down to about 140 million files, I see 42GiB being used but 68GiB allocated. I think that only restarting mfsmaster will set the allocated memory back down to the memory that is actually needed and used. This can be seen graphically on the "Master Charts" tab under "memory usage". I see only an ever-increasing graph of memory usage *except* for those times when I've had to restart the mfsmaster daemon.

> You may be forgetting two things:
>
> - MooseFS has the ability to keep files around for a while after they're
>   deleted, so that you can undelete them. If you're using that feature,
>   there will be metadata still around until the deleted files time out.

Good suggestion, but I've taken this into consideration and my numbers reflect both active files and those that are in the trash area.

> - Unix doesn't normally shrink the virtual data segment size of a
>   process just because malloc'ed memory has been freed. It is
>   technically possible for it to do so, limited by the maximum virtual
>   address of malloc'ed memory still in use, but it's really not needed:
>   free'd virtual memory will be reused by malloc.

Using top, it looks like resident memory is almost equal to the amount of virtual memory for the mfsmaster process, indicating that the memory is not only allocated but also occupying physical memory.

Steve
From: Tom I. H. <ti...@ha...> - 2016-10-05 04:57:16
"Wilson, Steven M" <st...@pu...> writes:

> I've noticed that the mfsmaster daemon doesn't seem to relinquish
> memory once it has allocated it for metadata.

I would be extremely surprised if that's the case. There may be memory leaks in MooseFS -- programmers aren't infallible -- but something as blatant as that I find hard to believe.

You may be forgetting two things:

- MooseFS has the ability to keep files around for a while after they're deleted, so that you can undelete them. If you're using that feature, there will be metadata still around until the deleted files time out.

- Unix doesn't normally shrink the virtual data segment size of a process just because malloc'ed memory has been freed. It is technically possible for it to do so, limited by the maximum virtual address of malloc'ed memory still in use, but it's really not needed: free'd virtual memory will be reused by malloc.

-tih
--
Elections cannot be allowed to change anything. --Dr. Wolfgang Schäuble
From: Wilson, S. M <st...@pu...> - 2016-10-04 19:58:39
Hi,

I've noticed that the mfsmaster daemon doesn't seem to relinquish memory once it has allocated it for metadata. We occasionally will have a spike in the number of files in a MooseFS file system, which will drive the memory usage up on the mfsmaster server. But when the spike is over, the memory is not released by mfsmaster. Last week, for example, we had one server that jumped up to more than 200 million files but it normally only has around 100 million. So mfsmaster is still holding onto 69GiB of memory and will only release it if I restart mfsmaster, which is never convenient to do.

Would freeing up unused memory in mfsmaster be something difficult to implement? If it's not too difficult, it should be worth consideration.

Thanks,
Steve
From: Michael T. <mic...@ho...> - 2016-10-04 09:18:13
I will check out your approach. Thanks.

________________________________
From: Ben Harker <bj...@ba...>
Sent: Tuesday, October 04, 2016 3:48:52 PM
To: mic...@ho...
Cc: moo...@li...
Subject: Re: [MooseFS-Users] SSD caching

Hey Michael, no program is needed. I just set up a bunch of chunkservers solely with SSDs, give them their own distinct label, and change the goals setup for my volume so that the -C create flag always writes to that SSD set of machines (also, writing to a single node rather than two speeds up that initial write, just a tip), and the -K keep label then moves the data over to the main storage pool later on in the background.

I've also tested using things like tmpfs and ramfs to create a similar RAM caching tier, but with no real solid results just yet. Imagine that, though!

On 4 Oct 2016, at 08:39, Michael Tinsay <mic...@ho...> wrote:

That's great to hear. Can you say what particular caching program are you using? I'm reading up on bcache, flashcache, and enhanceio.

I have 2 chunkservers in my setup, with each chunkserver having around 8 HDDs now of varying sizes (1TB-4TB disks). Each chunkserver can house 12 disks and I'm looking at filling up 2 of the vacant slots with SSDs in a RAID1 setup for cache.

________________________________
From: Ben Harker <bj...@ba...>
Sent: Tuesday, October 04, 2016 3:22:22 PM
To: mic...@ho...
Cc: moo...@li...
Subject: Re: [MooseFS-Users] SSD caching

SSD cache tier (8-10 nodes with 128GB SSD with a single write goal, the keep goal of 2 sends it off to main storage) has drastically improved our writes, and we use our cluster as a video edit drive. It's awesome, and really takes the pressure off your spinning storage.

On 4 Oct 2016, at 07:32, Michael Tinsay <mic...@ho...> wrote:

Hi. Is SSD caching (bcache, enhanceio, or flashcache) on chunkservers helpful in increasing throughput? Is it even recommended?

--- mike t.
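A rough sketch of how a setup like Ben's maps onto MooseFS 3.0 labels and storage classes (the label names, class name and path are purely illustrative, and the exact label-expression syntax should be checked against mfsscadmin(1) and mfschunkserver.cfg(5)):

    # on the SSD chunkservers, in /etc/mfs/mfschunkserver.cfg:
    LABELS = SSD
    # on the spinning-disk chunkservers:
    LABELS = HDD

    # a class that creates new chunks on an SSD node and keeps two copies on HDD nodes
    mfsscadmin create -C SSD -K HDD,HDD ssd_landing
    mfssetsclass -r ssd_landing /mnt/mfs/video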
From: Michael T. <mic...@ho...> - 2016-10-04 07:38:17
That's great to hear. Can you say what particular caching program are you using? I'm reading up on bcache, flashcache, and enhanceio.

I have 2 chunkservers in my setup, with each chunkserver having around 8 HDDs now of varying sizes (1TB-4TB disks). Each chunkserver can house 12 disks and I'm looking at filling up 2 of the vacant slots with SSDs in a RAID1 setup for cache.

________________________________
From: Ben Harker <bj...@ba...>
Sent: Tuesday, October 04, 2016 3:22:22 PM
To: mic...@ho...
Cc: moo...@li...
Subject: Re: [MooseFS-Users] SSD caching

SSD cache tier (8-10 nodes with 128GB SSD with a single write goal, the keep goal of 2 sends it off to main storage) has drastically improved our writes, and we use our cluster as a video edit drive. It's awesome, and really takes the pressure off your spinning storage.

On 4 Oct 2016, at 07:32, Michael Tinsay <mic...@ho...> wrote:

Hi. Is SSD caching (bcache, enhanceio, or flashcache) on chunkservers helpful in increasing throughput? Is it even recommended?

--- mike t.
From: Michael T. <mic...@ho...> - 2016-10-04 06:37:48
Hi.

I have two chunkservers that, over the years they have been operational, have seen HDDs added and broken ones replaced. I noticed recently that the read and write ops are skewed towards the newer disks compared to the older ones. Is there a tool to rebalance the contents so as to get a more even distribution of IO ops?

--- mike t.
From: Michael T. <mic...@ho...> - 2016-10-04 06:30:58
Hi. Is SSD caching (bcache, enhanceio, or flashcache) on chunkservers helpful in increasing throughput? Is it even recommended?

--- mike t.
From: Paweł K. <ma...@gm...> - 2016-09-28 10:05:51
Hello,

How can I speed up the balance between old and new chunk servers? We added new chunk servers and I would like to speed up flushing data off the old (and slow) server.

What are the suggested limits for CHUNKS_READ_REP_LIMIT and CHUNKS_WRITE_REP_LIMIT? Maybe some other parameters?

--
Paweł "Argail" Kowalski
"... a chair remains a chair, even when no one is sitting on it ..."
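The knobs in question live in mfsmaster.cfg. A sketch only, with placeholder values rather than recommendations (in MooseFS 3.x these limits take several comma-separated values whose meaning is described in mfsmaster.cfg(5)):

    # /etc/mfs/mfsmaster.cfg
    CHUNKS_WRITE_REP_LIMIT = 2,1,1,4
    CHUNKS_READ_REP_LIMIT  = 10,5,2,5
    # the chunk-loop settings also bound how fast rebalancing can proceed
    CHUNKS_LOOP_MIN_TIME = 300
    CHUNKS_LOOP_MAX_CPU  = 60

    # apply without a full restart
    mfsmaster reload

Raising the rebalance-related limits drains the old chunkservers faster, at the cost of extra load on them.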
From: Joe L. <jo...@ge...> - 2016-09-23 20:07:04
On Sep 23, 2016, at 2:47 AM, Wolfgang <moo...@wo...> wrote:
>
> Hi Group!
>
> I'm using current moosefs 3.0.81-1 with 7 machines.
>
> All kinds of HP ProLiant G5 machines (DL-160, DL-380, DL-385, ...) with
> SATA discs creating about 25TB of storage.
> The master has 32GB of RAM.
> Load on all machines is low (<0.20).
> All machines have 1Gbit, the master 2Gbit via LACP, all connected by a
> TP-Link TL-SG3424 managed switch.
>
> Now I want to copy 1.4TB of files (photos, documents, ...) from MFS to a
> 2TB SATA disc plugged into another ProLiant server as a backup of MFS.
> For this I mounted the SATA disc on /mnt and the MFS with
> mfsmount /data/moos
> and started
> rsync -avz /data/moos/snapshot/stuff_2016-09-21 /mnt/backup-on-hdd/
>
> But speed is very slow (about 2MB/s).
> I think speed at the beginning was faster but now it has nearly stalled.
>
> For true speed measurements I normally do a:
> watch -d -n 10 du -sh /mnt/backup-on-hdd/
> so I see how much was transferred within 10 seconds and I divide this by
> 10 to get the data per second.
> This is currently below 1 MB/s.
>
> top shows a load of 1.17 and a wait on disc of 25 at the moment.
>
> So the speed of the data transfer is very, very slow.
>
> To test whether the source or the target is slow, I tried (while the rsync
> is running):
> sudo dd if=/dev/zero of=/mnt/test.bin bs=1M count=1024
> which gives me 90MB/s, and
> sudo dd if=/dev/zero of=/data/moos/test.bin bs=1M count=1024
> which gives me 117MB/s, so neither the source nor the network nor the
> target is saturated.
>
> Can you help me get better throughput?
>
> Thank you & greetings from Austria
>
> -Wolfgang

I'm sort of thinking that rsync's use of very small chunks of files to build its checksum data is what is causing your performance difficulties here. My first thought was that maybe trying to use the -W (whole-file) flag might improve things, but depending on your files, and if there are large files with small changes, that might be terribly inefficient.

There is, however, a block-size flag ("-B, --block-size=SIZE" in the man page) which might let you choose a larger block size than rsync defaults to, which may let you use a more preferential block size for MooseFS. Depending on your file sizes and how they've been modified between the last rsync and this one, I'd try one of those options (if applicable) and see if it helps rsync perform better.

-Joe
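For anyone wanting to try this, the two variants look roughly like the following (the block size is an arbitrary example, not a tested value; and since photos are usually already compressed, dropping -z may also save some CPU):

    # skip the delta algorithm entirely (often fine for a plain backup copy):
    rsync -av -W /data/moos/snapshot/stuff_2016-09-21 /mnt/backup-on-hdd/

    # or keep the delta algorithm but use a larger checksum block:
    rsync -av -B 131072 /data/moos/snapshot/stuff_2016-09-21 /mnt/backup-on-hdd/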