From: Wilson, S. M <st...@pu...> - 2017-06-26 20:18:36
We normally set up two systems filled with disks (between 8 and 24, depending on the system). Each of the two systems runs a chunk server. One of the systems also runs the master server while the other one runs a metalogger server. The MooseFS volumes that are exported from these systems contain user home directories as well as data directories for both large image files and millions of small ASCII files. And we set up the default goal to be two copies of each file, one copy stored on each of the two chunk servers. The performance has been very acceptable overall.

I couldn't tell from your original email if you were intending to run multiple chunk servers on this system or just have one chunk server that consolidated all the storage that you have available on that system. It may be a little tricky (and not recommended) to run multiple chunk servers on the same system, although it *might* work if you put the chunk servers into containers (as mentioned on the LizardFS mailing list) or set up a different mfschunkserver.cfg file for each chunk server, making sure that you modify CSSERV_LISTEN_PORT so that it is different for each chunk server.

Steve
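If anyone does try the multiple-chunkservers-on-one-host approach Steve describes, the per-instance settings would look roughly like this. This is only a sketch: the paths, ports and file names below are illustrative, not taken from the post; CSSERV_LISTEN_PORT, DATA_PATH, HDD_CONF_FILENAME and the -c option are the standard mfschunkserver settings being varied.

# first instance: /etc/mfs/mfschunkserver_a.cfg
DATA_PATH = /var/lib/mfs-a
HDD_CONF_FILENAME = /etc/mfs/mfshdd_a.cfg
CSSERV_LISTEN_PORT = 9422

# second instance: /etc/mfs/mfschunkserver_b.cfg
DATA_PATH = /var/lib/mfs-b
HDD_CONF_FILENAME = /etc/mfs/mfshdd_b.cfg
CSSERV_LISTEN_PORT = 9522

# start each instance against its own configuration file
mfschunkserver -c /etc/mfs/mfschunkserver_a.cfg start
mfschunkserver -c /etc/mfs/mfschunkserver_b.cfg start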
From: WK <wk...@bn...> - 2017-06-26 19:49:15
We have done so in an archive system, where the traffic doesn't have to be terribly responsive and tolerant to performance lags. Worked fine.

It is really a resource issue. If the master has to compete with the chunkserver for RAM, CPU, disk I/O etc. and/or you have lots of activity, then results are going to be uneven.

-wk

On 6/26/2017 10:38 AM, Warren Myers wrote:
> Use case: attaching multiple LUNs/NAS mounts/etc to a single server. Want to use those multiple mounts as a failed-over file system using MooseFS.
>
> Can I run the master on the same server as the chunk - so that each mount gets treated as a chunk, and the master distributes across them?
>
> Warren Myers
> https://antipaucity.com
> https://www.digitalocean.com/?refcode=d197a961987a
From: David M. <dav...@pr...> - 2017-06-26 18:22:37
Hi Warren,

I don't fully understand your question; however, I do run a single-partition chunk server on the same dedicated server as my master and it performs well.

If you're using DigitalOcean you may encounter performance issues: as per the MooseFS best practices, it is recommended not to use virtualized servers.

Cheers,
Dave

> -------- Original Message --------
> Subject: [MooseFS-Users] Running the master on the same host as a chunk?
> Local Time: June 26, 2017 1:38 PM
> UTC Time: June 26, 2017 5:38 PM
> From: wa...@an...
> To: moo...@li...
>
> Use case: attaching multiple LUNs/NAS mounts/etc to a single server. Want to use those multiple mounts as a failed-over file system using MooseFS.
>
> Can I run the master on the same server as the chunk - so that each mount gets treated as a chunk, and the master distributes across them?
From: Warren M. <wa...@an...> - 2017-06-26 17:54:05
Use case: attaching multiple LUNs/NAS mounts/etc to a single server. Want to use those multiple mounts as a failed-over file system using MooseFS.

Can I run the master on the same server as the chunk - so that each mount gets treated as a chunk, and the master distributes across them?

Warren Myers
https://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a
From: R.C. <mil...@gm...> - 2017-06-17 06:35:19
Hi all,

I want to share my recent experience with you. Two days ago or so we experienced an unexpected power loss. The master and 1 chunk-server shut down properly, but the remaining 2 chunk-servers had a degraded UPS battery that left them too early... After power came back the metadata was healthy (as expected) but 1 data chunk was lost. 1 file was affected and, even with mfsfilerepair, I had no chance of restoring it.

At the beginning I thought it was due to the write cache on the chunk disks: if that specific chunk was about to be written on the servers with the faulty UPS, most likely the write cache was still holding the data and it was then lost forever. Unfortunately, a rapid check of the disk configuration negated that, write cache being disabled on those two servers. Moreover, no battery-backed RAID controllers are used for chunk disks. I'm quite sure write barriers are enabled by default on CentOS (the only distro we use here).

How can I mitigate the possibility of experiencing such a problem again? (Apart from changing UPS batteries... :-) The goal is now set to 2. Should I increase it to 3?

Our system:
Server1: master + chunk (2 dedicated HDs - XFS filesystem - write cache enabled)
Server2: metalogger + chunk (2 dedicated HDs - XFS filesystem - write cache disabled)
Server3: metalogger + chunk (2 dedicated HDs - XFS filesystem - write cache disabled)
Server4: metalogger + chunk (2 dedicated HDs - XFS filesystem - write cache disabled)

Thanks for reading.
Bye,
Raffaello
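For reference, the checks and the goal change discussed above can be done with standard tools; a minimal sketch (the mount point and device name are placeholders, not from the original post):

# inspect and raise the replication goal for everything under the mount point
mfsgetgoal -r /mnt/mfs
mfssetgoal -r 3 /mnt/mfs

# confirm the on-drive write cache really is off on a chunk disk (0 = off)
hdparm -W /dev/sdb

# barriers are on for XFS unless a filesystem was mounted with "nobarrier"
grep nobarrier /proc/mounts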
From: Piotr R. K. <pio...@mo...> - 2017-06-07 10:13:12
Dear MooseFS Users,

as we prolonged our repository key, please follow the instructions available at https://moosefs.com/download.html to update the key on your machines.

Ubuntu/Debian:
wget -O - http://ppa.moosefs.com/moosefs.key | apt-key add -

CentOS/RHEL:
curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS

Best regards,
Peter

--
Piotr Robert Konopelko | Mobile: +48 601 476 440
MooseFS Client Support Team | moosefs.com
From: Michael T. <mic...@ho...> - 2017-06-01 15:15:15
Thanks Alex. I'll get right to upgrading the master and chunkservers a.s.a.p.

Best Regards.

--- mike t.
From: Aleksander W. <ale...@mo...> - 2017-06-01 09:32:09
Hi.

This is not a supported configuration. Clients and chunkservers should be equal to or lower than the MooseFS master version. In your case the MooseFS master and MooseFS chunkservers have a lower version than the MooseFS client. Please understand that new communication packets will not be able to communicate with older components.

Please read the upgrade guide: https://moosefs.com/documentation/moosefs-3-0.html

Best Regards
Alex
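A quick way to confirm which component versions are actually installed on each host before upgrading is to ask the package manager; a simple sketch, assuming the packages came from the MooseFS repositories:

# Debian/Ubuntu
dpkg -l | grep moosefs

# CentOS/RHEL
rpm -qa | grep moosefs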
From: Michael T. <mic...@ho...> - 2017-06-01 09:19:28
Hi!

I recently updated most of my moosefs-client installations from 3.0.91 to 3.0.92. My master and chunkservers are still on 3.0.91. I'm investigating why a number of my nightly rsync jobs stopped working. I noticed this happening on 3.0.92:

root@KVM03:~# ls /mnt/mfsroot/ServerMounts/
AptCache    EDCISServer01  FileServer01  Syslog    Zimbra01Store
AptCacheII  FileServer     MailArchive   Zimbra01

root@KVM03:~# ls -hl /mnt/mfsroot/ServerMounts/
ls: /mnt/mfsroot/ServerMounts/: Input/output error
total 28M
drwxr-xr-x  3 root     root     2.9M Nov 21  2016 AptCache
drwxrwxr-x  3 tinsaymc tinsaymc 2.9M Sep  8  2014 AptCacheII
drwxrwxr-x 21 root     1001     2.9M May 29 16:29 EDCISServer01
drwxr-xr-x  4 root     root     3.0M Aug 30  2014 FileServer
drwxr-xr-x  4 root     root     3.9M Aug 29  2014 FileServer01
drwxr-xr-x  9 root     root     2.9M Aug  6  2014 MailArchive
drwxrwxr-x  3 tinsaymc tinsaymc 2.0M Sep 20  2012 Syslog
drwxr-xr-x 51 root     root     3.9M Mar 29 00:15 Zimbra01
drwxr-xr-x  4 1001     1001     3.9M May  2  2011 Zimbra01Store

root@KVM03:~# ls -l /mnt/mfsroot/ServerMounts/
ls: /mnt/mfsroot/ServerMounts/: Input/output error
total 28431
drwxr-xr-x  3 root     root     3001229 Nov 21  2016 AptCache
drwxrwxr-x  3 tinsaymc tinsaymc 3002596 Sep  8  2014 AptCacheII
drwxrwxr-x 21 root     1001     3009744 May 29 16:29 EDCISServer01
drwxr-xr-x  4 root     root     3092157 Aug 30  2014 FileServer
drwxr-xr-x  4 root     root     4000170 Aug 29  2014 FileServer01
drwxr-xr-x  9 root     root     3001875 Aug  6  2014 MailArchive
drwxrwxr-x  3 tinsaymc tinsaymc 2001649 Sep 20  2012 Syslog
drwxr-xr-x 51 root     root     4000482 Mar 29 00:15 Zimbra01
drwxr-xr-x  4 1001     1001     4000407 May  2  2011 Zimbra01Store

For some reason, moosefs is tripping up when the -l parameter is used in ls. This doesn't happen on the few clients that are still on 3.0.91.

--- mike t.
From: Piotr R. K. <pio...@mo...> - 2017-05-21 21:20:49
Hi Dave,

> I would assume that all moosefs components (master, chunks, metaloggers, clients) are intended or highly recommended to be run within the same DC and better still under the same network switch.

Could you please add this as an issue to https://github.com/moosefs/Documentation?

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Client Support Team | moosefs.com
From: Wilson, S. M <st...@pu...> - 2017-05-19 21:22:06
Hi,

The newer versions of Chromium have issues when its user configuration/profile directory is located on a MooseFS filesystem. This may be a Chromium bug but I thought I'd mention it on this list so that other users are aware of it.

I've only tested versions 55 and 58 of Chromium. The issues appeared sometime after version 55, since that version seems to run fine while version 58 demonstrates the problems. Running Chromium v58 on a filesystem using mfsmount 3.0.86 doesn't work at all; it crashes at startup with this error (plus a stack trace):

Received signal 7 BUS_ADRERR 7f6410e12000

Chromium v58 won't crash on later versions of mfsmount (3.0.88, 3.0.90, 3.0.92), but it is very sluggish at times. Version 55 of Chromium, on the other hand, seems to work fine on all mfsmount versions without crashing or sluggish behavior.

If I mount the filesystem using "mfscachemode=DIRECT" then I'm able to run Chromium v58 on a filesystem using mfsmount 3.0.86 without crashing. And there is no apparent sluggishness.

This testing is done on workstations running Ubuntu 16.04 and servers (chunkservers and master) running MooseFS 3.0.90.

Regards,
Steve
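For anyone wanting to try the same workaround, the cache mode is a regular mount option; a minimal sketch (the master hostname and mount point here are placeholders):

# mount with the kernel data cache bypassed
mfsmount /mnt/mfs -H mfsmaster.example.lan -o mfscachemode=DIRECT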
From: Piotr R. K. <pio...@mo...> - 2017-05-19 04:12:47
> I would assume that all moosefs components (master, chunks, metaloggers, clients) are intended or highly recommended to be run within the same DC and better still under the same network switch.

That is correct.

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Client Support Team | moosefs.com
From: David M. <dav...@pr...> - 2017-05-18 18:47:23
I have not seen any mention of this in the documentation but I would assume that all moosefs components (master, chunks, metaloggers, clients) are intended or highly recommended to be run within the same DC and better still under the same network switch.

Cheers,
Dave

-------- Original Message --------
Subject: Re: [MooseFS-Users] slow lookup
Local Time: May 18, 2017 1:26 PM
UTC Time: May 18, 2017 5:26 PM
From: it...@da...
To: David Myer <dav...@pr...>, moo...@li...

Different DC, but connection is stable 1GB.

--
Best regards,
Eugene

On 18 May 2017, at 19:30, David Myer <dav...@pr...> wrote:

38ms is very high latency - are the servers in the same datacenter? You should expect <1ms network latency if they are.

Cheers,
Dave

-------- Original Message --------
Subject: Re: [MooseFS-Users] slow lookup
Local Time: May 18, 2017 5:13 AM
UTC Time: May 18, 2017 9:13 AM
From: it...@da...
To: Aleksander Wieliczko <ale...@mo...>, moo...@li...

Ping stat:
124 packets transmitted, 124 received, 0% packet loss, time 123143ms
rtt min/avg/max/mdev = 38.024/38.077/38.209/0.252 ms

I found an issue with FUSE on the master server, I cannot mount mfs locally:

fuse: device not found, try 'modprobe fuse' first
error in fuse_mount

modprobe fuse
modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)

dmesg | grep fuse
[4248962.398993] fuse: Unknown symbol setattr_prepare (err 0)
[4410205.380837] fuse: Unknown symbol setattr_prepare (err 0)
[4410216.902707] fuse: Unknown symbol setattr_prepare (err 0)

I will try to find out what is the issue with FUSE. Can it cause my lookup issue?

--
Best regards,
Eugene

On 18 May 2017, at 12:01, Aleksander Wieliczko <ale...@mo...> wrote:

Hi.
Thank you for all this information. Hardware is really nice.

Would you be so kind and execute these two tests in your environment?

1. Mount the MooseFS client on the MooseFS master and execute time ls -l in a folder with over 100k files
2. Execute the ping command from the "slow" MooseFS client to the MooseFS master

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 18.05.2017 10:27, Eugene Diatlov wrote:

CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
RAM 32 GB with ECC
1 Gbit connection.
load average: 0.00, 0.01, 0.05
Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux

It is a totally new empty server. If I copy a 1 GB file to mfs it takes around 22 seconds, so it couldn't be a network issue. The problem is only with the lookup operation.

The mfs metadata files are placed on the SSD system drive. read/write test:

dd if=/dev/zero of=/test bs=1024 count=5024000
5024000+0 records in
5024000+0 records out
5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s

dd if=/test of=/dev/null bs=1024 count=5024000
5024000+0 records in
5024000+0 records out
5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s

mfschunkserver raid read/write local test (not thru mfs):

dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000
5024000+0 records in
5024000+0 records out
5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s

dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000
5024000+0 records in
5024000+0 records out
5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s

--
Best regards,
Eugene

On 18 May 2017, at 10:51, Aleksander Wieliczko <ale...@mo...> wrote:

Hi,
metadata operations like ls mostly depend on MooseFS master CPU speed and network latency.

Would you be so kind and tell us something more about these parameters?

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 18.05.2017 09:05, Ben Harker wrote:

I've experienced weirdness when using various raid setups. It's stated in the documentation that a non-raid setup is preferred - at least from what I've remembered. MFS is a fuse filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files due to its distributed nature.

I've mitigated this somewhat by introducing an SSD-based tier of storage that holds files smaller than a certain size. This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files.

Hope that's helpful!

On May 18, 2017 07:06, Eugene Diatlov <it...@da...> wrote:

Hi,

I have a hardware server with the latest moosefs. HDDs are in a hardware RAID5. Debian 8.
I have a lot of small files in folders (100k+). When I enter a folder I have a big delay while the system reads the directory's file list.
Using moosefs metrics I found that the speed of lookup operations is equal to 1.8k operations per minute at maximum.
Why could this happen? I have another server with moosefs and the lookup speed can be 300k per minute and higher.

--
Best regards,
Eugene
From: Alex C. <ac...@in...> - 2017-05-18 18:30:37
BTW no, the fuse issue means your master probably has some incompatible packages installed - a strict works/doesn't work situation.

On 18/05/17 19:21, Alex Crow wrote:
> It doesn't matter how stable your 1GB bandwidth is if you have that kind of latency. There is no way you can expect to get the same lookup performance if your Master/Client/Chunkserver setup has that kind of geographical/topological separation.
>
> Alex
From: Alex C. <ac...@in...> - 2017-05-18 18:21:59
It doesn't matter how stable your 1GB bandwidth is if you have that kind of latency. There is no way you can expect to get the same lookup performance if your Master/Client/Chunkserver setup has that kind of geographical/topological separation.

Alex

On 18/05/17 18:26, Eugene Diatlov wrote:
> Different DC, but connection is stable 1GB.
>
> --
> Best regards,
> Eugene
From: Eugene D. <it...@da...> - 2017-05-18 17:26:19
|
Different DC, but connection is stable 1GB. -- Best regards, Eugene > 18 мая 2017 г., в 19:30, David Myer <dav...@pr...> написал(а): > > 38ms is very high latency - are the servers in the same datacenter? You should expect <1ms network latency if they are. > > Cheers, > Dave > > > Sent with ProtonMail <https://protonmail.com/> Secure Email. > >> -------- Original Message -------- >> Subject: Re: [MooseFS-Users] slow lookup >> Local Time: May 18, 2017 5:13 AM >> UTC Time: May 18, 2017 9:13 AM >> From: it...@da... >> To: Aleksander Wieliczko <ale...@mo...> >> moo...@li... >> >> Ping stat: >> 124 packets transmitted, 124 received, 0% packet loss, time 123143ms >> rtt min/avg/max/mdev = 38.024/38.077/38.209/0.252 ms >> >> I found an issue with FUSE on master server, I cannot mount mfs localy: >> fuse: device not found, try 'modprobe fuse' first >> error in fuse_mount >> >> modprobe fuse >> modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg) >> >> dmesg | grep fuse >> [4248962.398993] fuse: Unknown symbol setattr_prepare (err 0) >> [4410205.380837] fuse: Unknown symbol setattr_prepare (err 0) >> [4410216.902707] fuse: Unknown symbol setattr_prepare (err 0) >> >> I will try to find out what is the issue with FUSE. Can it cause my lookup issue? >> >> >> -- >> Best regards, >> Eugene >> >> >> >>> 18 мая 2017 г., в 12:01, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> написал(а): >>> >>> Hi. >>> Thank you for all this information. >>> Hardware is really nice. >>> >>> Would you be so kind and execute this two tests in your environment? >>> >>> 1. Mount Moosefs Client on MooseFS master and execute time ls -l in folder with over 100k files >>> 2. Execute ping command from "slow" MooseFS client to MooseFS master >>> >>> Best regards >>> Aleksander Wieliczko >>> Technical Support Engineer >>> MooseFS.com <> >>> >>> On 18.05.2017 10:27, Eugene Diatlov wrote: >>>> CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz >>>> RAM 32 GB with ECC >>>> 1 Gbit connection. >>>> load average: 0.00, 0.01, 0.05 >>>> Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux >>>> >>>> It is a totally new empty server. If I copy 1 gig file to mfs it tooks around 22 seconds, so it is couldn't be a network issue. >>>> The problem only with lookup operation. >>>> >>>> mfs metadata files are placed on ssd system drive. read/write test: >>>> >>>> dd if=/dev/zero of=/test bs=1024 count=5024000 >>>> 5024000+0 records in >>>> 5024000+0 records out >>>> 5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s >>>> >>>> dd if=/test of=/dev/null bs=1024 count=5024000 >>>> 5024000+0 records in >>>> 5024000+0 records out >>>> 5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s >>>> >>>> mfschukserver raid read/write local test (not thru mfs): >>>> dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000 >>>> 5024000+0 records in >>>> 5024000+0 records out >>>> 5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s >>>> >>>> dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000 >>>> 5024000+0 records in >>>> 5024000+0 records out >>>> 5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s >>>> >>>> -- >>>> Best regards, >>>> Eugene >>>> >>>> >>>> >>>>> 18 мая 2017 г., в 10:51, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> написал(а): >>>>> >>>>> Hi, >>>>> metadata operations like ls, mostly depends on MooseFS master CPU speed and network latency. >>>>> >>>>> Would you be so kind and tell us something more about this parameters? 
>>>>> >>>>> >>>>> Best regards >>>>> Aleksander Wieliczko >>>>> Technical Support Engineer >>>>> MooseFS.com <> >>>>> >>>>> On 18.05.2017 09:05, Ben Harker wrote: >>>>>> I've experienced weirdness when using various raid setups. It's stated in the documentation that a non raid setup is preferred - at least from what I've remembered. MFS is a fuse filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files due to it's distributed nature. >>>>>> >>>>>> I've mitigated this somewhat by introducing an SSD based tier of storage that holds files smaller than a certain size. This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files. >>>>>> >>>>>> Hope that's helpful! >>>>>> >>>>>> On May 18, 2017 07:06, Eugene Diatlov <it...@da...> <mailto:it...@da...> wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I have a hardware server with latest moosefs. HDD's are in a hardware RAID5. Debian 8. >>>>>> I have a lot of small files in folders (100k+). When I enter the folder I have a big delay while the system reading directory's files list. >>>>>> Using moosefs metrics I found that the speed of lookup operations is equal to 1,8k operations per minute at maximum. >>>>>> Why this could happen? I have another server with moosefs and the speed of lookup could be 300k per minute and higher. >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best regards, >>>>>> Eugene >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Check out the vibrant tech community on one of the world's most >>>>>> engaging tech sites, Slashdot.org <http://slashdot.org/>! http://sdm.link/slashdot <http://sdm.link/slashdot> >>>>>> >>>>>> >>>>>> _________________________________________ >>>>>> moosefs-users mailing list >>>>>> moo...@li... <mailto:moo...@li...> >>>>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>> > |
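For what it's worth, the numbers in this thread are consistent with lookups being bound by that round-trip time rather than by bandwidth: if the client resolves entries one at a time, each lookup costs at least one RTT to the master, so a 38 ms round trip caps the rate at roughly 60 / 0.038 ≈ 1.6k lookups per minute — right around the 1.8k/min Eugene measured — while a ~0.2 ms LAN round trip allows on the order of 300k/min, matching the other server. A quick sanity check of that arithmetic (plain awk, RTT values taken from the ping output quoted in this thread):

# serial lookups: at most one per round trip
awk 'BEGIN { printf "%.0f lookups/min at 38 ms RTT\n", 60/0.038 }'
awk 'BEGIN { printf "%.0f lookups/min at 0.2 ms RTT\n", 60/0.0002 }'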
From: David M. <dav...@pr...> - 2017-05-18 16:30:23
|
38ms is very high latency - are the servers in the same datacenter? You should expect <1ms network latency if they are. Cheers, Dave Sent with [ProtonMail](https://protonmail.com) Secure Email. -------- Original Message -------- Subject: Re: [MooseFS-Users] slow lookup Local Time: May 18, 2017 5:13 AM UTC Time: May 18, 2017 9:13 AM From: it...@da... To: Aleksander Wieliczko <ale...@mo...> moo...@li... Ping stat: 124 packets transmitted, 124 received, 0% packet loss, time 123143ms rtt min/avg/max/mdev = 38.024/38.077/38.209/0.252 ms I found an issue with FUSE on master server, I cannot mount mfs localy: fuse: device not found, try 'modprobe fuse' first error in fuse_mount modprobe fuse modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg) dmesg | grep fuse [4248962.398993] fuse: Unknown symbol setattr_prepare (err 0) [4410205.380837] fuse: Unknown symbol setattr_prepare (err 0) [4410216.902707] fuse: Unknown symbol setattr_prepare (err 0) I will try to find out what is the issue with FUSE. Can it cause my lookup issue? -- Best regards, Eugene 18 мая 2017 г., в 12:01, Aleksander Wieliczko <ale...@mo...> написал(а): Hi. Thank you for all this information. Hardware is really nice. Would you be so kind and execute this two tests in your environment? 1. Mount Moosefs Client on MooseFS master and execute time ls -l in folder with over 100k files 2. Execute ping command from "slow" MooseFS client to MooseFS master Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 18.05.2017 10:27, Eugene Diatlov wrote: CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz RAM 32 GB with ECC 1 Gbit connection. load average: 0.00, 0.01, 0.05 Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux It is a totally new empty server. If I copy 1 gig file to mfs it tooks around 22 seconds, so it is couldn't be a network issue. The problem only with lookup operation. mfs metadata files are placed on ssd system drive. read/write test: dd if=/dev/zero of=/test bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s dd if=/test of=/dev/null bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s mfschukserver raid read/write local test (not thru mfs): dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s -- Best regards, Eugene 18 мая 2017 г., в 10:51, Aleksander Wieliczko <ale...@mo...> написал(а): Hi, metadata operations like ls, mostly depends on MooseFS master CPU speed and network latency. Would you be so kind and tell us something more about this parameters? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 18.05.2017 09:05, Ben Harker wrote: I've experienced weirdness when using various raid setups. It's stated in the documentation that a non raid setup is preferred - at least from what I've remembered. MFS is a fuse filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files due to it's distributed nature. I've mitigated this somewhat by introducing an SSD based tier of storage that holds files smaller than a certain size. 
This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files. Hope that's helpful! On May 18, 2017 07:06, Eugene Diatlov [<it...@da...>](mailto:it...@da...) wrote: Hi, I have a hardware server with latest moosefs. HDD's are in a hardware RAID5. Debian 8. I have a lot of small files in folders (100k+). When I enter the folder I have a big delay while the system reading directory's files list. Using moosefs metrics I found that the speed of lookup operations is equal to 1,8k operations per minute at maximum. Why this could happen? I have another server with moosefs and the speed of lookup could be 300k per minute and higher. -- Best regards, Eugene ------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, [Slashdot.org](http://slashdot.org/) ! http://sdm.link/slashdot _________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: <ma...@me...> - 2017-05-18 12:50:00
|
On 18.05.2017 at 11:13, Eugene Diatlov wrote: > Ping stat: > 124 packets transmitted, 124 received, 0% packet loss, time 123143ms > rtt min/avg/max/mdev = 38.024/38.077/38.209/0.252 ms 38 ms on a gigabit link? Something is broken in your network, or the master is reached over an internet connection. |
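If it is unclear whether the client reaches the master over a local link or a routed internet path, the hop count makes it obvious. A sketch (the master hostname below is a placeholder):

# show every hop between the client and the master
traceroute mfsmaster.example
# or, with mtr installed, a summarized report over 100 probes
mtr --report --report-cycles 100 mfsmaster.example

A single-hop path with sub-millisecond latency points to a link problem; many hops means the traffic really is leaving the datacenter.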
From: Aleksander W. <ale...@mo...> - 2017-05-18 09:52:17
|
Hi, Basically, to slow down internal re-balancing you have to modify the last two parameters in CHUNKS_(READ/WRITE)_REP_LIMIT in the mfsmaster.cfg file, and on the chunkserver: HDD_REBALANCE_UTILIZATION and WORKERS_MAX. By the way, we will try to add some extra parameter to avoid such a situation in the future. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 17.05.2017 20:02, David Myer wrote: > Hi MooseFS users, > > I'm trying to reduce the impact of chunk server rebalancing after new > partitions were added. I have set some very conservative master and > chunkserver cfg parameters but I can only seem to get the linux server > loads down to around 2. > > http://i.imgur.com/cnomEaU.png > > Here are some params I have set to try to slow the master down: > > ACCEPTABLE_PERCENTAGE_DIFFERENCE = 10.0 > NICE_LEVEL = 19 > REPLICATIONS_DELAY_INIT = 600 > CHUNKS_LOOP_MAX_CPS = 1 > CHUNKS_LOOP_MIN_TIME = 3600 > CHUNKS_SOFT_DEL_LIMIT = 1 > CHUNKS_HARD_DEL_LIMIT = 2 > CHUNKS_WRITE_REP_LIMIT = 2,1,1,1 > CHUNKS_READ_REP_LIMIT = 10,1,1,1 > ATIME_MODE = 4 > CS_HEAVY_LOAD_THRESHOLD = 5 > CS_HEAVY_LOAD_RATIO_THRESHOLD = 1.0 > > Here are some params I have set to try to slow the chunkservers down: > > HDD_REBALANCE_UTILIZATION = 1 > NICE_LEVEL = 19 > WORKERS_MAX = 1 > WORKERS_MAX_IDLE = 1 > > The parameter changes have definitely helped, reducing the linux > server loads from 30 down to ~2. It is hard to say which parameters > have the most effect or if some parameters are even working at all > (such as CS_HEAVY_LOAD_*, which doesn't seem to be putting the chunk > servers into grace mode). > > From watching IO via iotop I think the main problem is how much IO > mfschunkserver still uses even when greatly restricted by the > aforementioned parameters. It would be great to throttle its > read/write speeds somehow. > > Any general ideas on mitigating the effect of big chunkserver work > while in production and under heavy use by mfsmount? I have also moved > all other services off the chunkserver machines. Could mfsmount be > turned off on the chunkservers and only used on the master? > > Any help would be greatly appreciated. > > Thank you, > Dave > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
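Putting Aleksander's pointers next to David's list, the throttling knobs mentioned in this thread boil down to a handful of lines. The values below are only an illustrative sketch of a heavily throttled setup, not recommendations:

# mfsmaster.cfg - replication limits per chunkserver (the last two
# values are the rebalancing limits Aleksander refers to)
CHUNKS_WRITE_REP_LIMIT = 2,1,1,1
CHUNKS_READ_REP_LIMIT = 10,1,1,1

# mfschunkserver.cfg - throttle internal disk-to-disk rebalancing
HDD_REBALANCE_UTILIZATION = 1
WORKERS_MAX = 1
WORKERS_MAX_IDLE = 1

After editing, the services need to pick up the changes (e.g. a reload of the master and a reload or restart of the chunkservers).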
From: Eugene D. <it...@da...> - 2017-05-18 09:14:08
|
Ping stat: 124 packets transmitted, 124 received, 0% packet loss, time 123143ms rtt min/avg/max/mdev = 38.024/38.077/38.209/0.252 ms I found an issue with FUSE on master server, I cannot mount mfs localy: fuse: device not found, try 'modprobe fuse' first error in fuse_mount modprobe fuse modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg) dmesg | grep fuse [4248962.398993] fuse: Unknown symbol setattr_prepare (err 0) [4410205.380837] fuse: Unknown symbol setattr_prepare (err 0) [4410216.902707] fuse: Unknown symbol setattr_prepare (err 0) I will try to find out what is the issue with FUSE. Can it cause my lookup issue? -- Best regards, Eugene > 18 мая 2017 г., в 12:01, Aleksander Wieliczko <ale...@mo...> написал(а): > > Hi. > Thank you for all this information. > Hardware is really nice. > > Would you be so kind and execute this two tests in your environment? > > 1. Mount Moosefs Client on MooseFS master and execute time ls -l in folder with over 100k files > 2. Execute ping command from "slow" MooseFS client to MooseFS master > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://4/moosefs.com> > > On 18.05.2017 10:27, Eugene Diatlov wrote: >> CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz >> RAM 32 GB with ECC >> 1 Gbit connection. >> load average: 0.00, 0.01, 0.05 >> Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux >> >> It is a totally new empty server. If I copy 1 gig file to mfs it tooks around 22 seconds, so it is couldn't be a network issue. >> The problem only with lookup operation. >> >> mfs metadata files are placed on ssd system drive. read/write test: >> >> dd if=/dev/zero of=/test bs=1024 count=5024000 >> 5024000+0 records in >> 5024000+0 records out >> 5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s >> >> dd if=/test of=/dev/null bs=1024 count=5024000 >> 5024000+0 records in >> 5024000+0 records out >> 5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s >> >> mfschukserver raid read/write local test (not thru mfs): >> dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000 >> 5024000+0 records in >> 5024000+0 records out >> 5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s >> >> dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000 >> 5024000+0 records in >> 5024000+0 records out >> 5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s >> >> -- >> Best regards, >> Eugene >> >> >> >>> 18 мая 2017 г., в 10:51, Aleksander Wieliczko <ale...@mo... <mailto:ale...@mo...>> написал(а): >>> >>> Hi, >>> metadata operations like ls, mostly depends on MooseFS master CPU speed and network latency. >>> >>> Would you be so kind and tell us something more about this parameters? >>> >>> >>> Best regards >>> Aleksander Wieliczko >>> Technical Support Engineer >>> MooseFS.com <x-msg://2/moosefs.com> >>> >>> On 18.05.2017 09:05, Ben Harker wrote: >>>> I've experienced weirdness when using various raid setups. It's stated in the documentation that a non raid setup is preferred - at least from what I've remembered. MFS is a fuse filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files due to it's distributed nature. >>>> >>>> I've mitigated this somewhat by introducing an SSD based tier of storage that holds files smaller than a certain size. 
This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files. >>>> >>>> Hope that's helpful! >>>> >>>> On May 18, 2017 07:06, Eugene Diatlov <it...@da...> <mailto:it...@da...> wrote: >>>> Hi, >>>> >>>> I have a hardware server with latest moosefs. HDD's are in a hardware RAID5. Debian 8. >>>> I have a lot of small files in folders (100k+). When I enter the folder I have a big delay while the system reading directory's files list. >>>> Using moosefs metrics I found that the speed of lookup operations is equal to 1,8k operations per minute at maximum. >>>> Why this could happen? I have another server with moosefs and the speed of lookup could be 300k per minute and higher. >>>> >>>> >>>> -- >>>> Best regards, >>>> Eugene >>>> >>>> >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Check out the vibrant tech community on one of the world's most >>>> engaging tech sites, Slashdot.org <http://slashdot.org/>! http://sdm.link/slashdot <http://sdm.link/slashdot> >>>> >>>> _________________________________________ >>>> moosefs-users mailing list >>>> moo...@li... <mailto:moo...@li...> >>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>> >> > |
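On the fuse error above: "Unknown symbol setattr_prepare" usually means the fuse.ko in /lib/modules was built for a newer kernel ABI than the one currently running (setattr_prepare does not exist in a 3.16 kernel), which typically happens after a kernel package upgrade without a reboot. A quick way to confirm the mismatch, assuming a standard Debian layout:

# kernel actually running
uname -r
# kernel the on-disk fuse module was built for
modinfo fuse | grep vermagic
# installed kernel images
dpkg -l 'linux-image-*' | grep '^ii'

If the two versions differ, rebooting into the matching kernel (or reinstalling the matching linux-image package) normally clears the error. It should only affect mounts attempted on that host, not lookups performed by other clients.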
From: Aleksander W. <ale...@mo...> - 2017-05-18 09:01:22
|
Hi. Thank you for all this information. Hardware is really nice. Would you be so kind and execute this two tests in your environment? 1. Mount Moosefs Client on MooseFS master and execute time ls -l in folder with over 100k files 2. Execute ping command from "slow" MooseFS client to MooseFS master Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 18.05.2017 10:27, Eugene Diatlov wrote: > CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz > RAM 32 GB with ECC > 1 Gbit connection. > load average: 0.00, 0.01, 0.05 > Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) > x86_64 GNU/Linux > > It is a totally new empty server. If I copy 1 gig file to mfs it tooks > around 22 seconds, so it is couldn't be a network issue. > The problem only with lookup operation. > > mfs metadata files are placed on ssd system drive. read/write test: > > dd if=/dev/zero of=/test bs=1024 count=5024000 > 5024000+0 records in > 5024000+0 records out > 5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s > > dd if=/test of=/dev/null bs=1024 count=5024000 > 5024000+0 records in > 5024000+0 records out > 5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s > > mfschukserver raid read/write local test (not thru mfs): > dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000 > 5024000+0 records in > 5024000+0 records out > 5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s > > dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000 > 5024000+0 records in > 5024000+0 records out > 5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s > > -- > Best regards, > Eugene > > > >> 18 мая 2017 г., в 10:51, Aleksander Wieliczko >> <ale...@mo... >> <mailto:ale...@mo...>> написал(а): >> >> Hi, >> metadata operations like ls, mostly depends on MooseFS master CPU >> speed and network latency. >> >> Would you be so kind and tell us something more about this parameters? >> >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com <x-msg://2/moosefs.com> >> >> On 18.05.2017 09:05, Ben Harker wrote: >>> I've experienced weirdness when using various raid setups. It's >>> stated in the documentation that a non raid setup is preferred - at >>> least from what I've remembered. MFS is a fuse filesystem that's >>> bound to have a performance penalty somewhere when it comes to huge >>> numbers of small files due to it's distributed nature. >>> >>> I've mitigated this somewhat by introducing an SSD based tier of >>> storage that holds files smaller than a certain size. This one's >>> definitely helped, and you can also set it as the C (create) tier >>> for when files get created if you get creative with your storage >>> classes - the initial writes to SSD are much quicker and can make >>> your setup feel much faster when writing files. >>> >>> Hope that's helpful! >>> >>> On May 18, 2017 07:06, Eugene Diatlov <it...@da...> wrote: >>> >>> Hi, >>> >>> I have a hardware server with latest moosefs. HDD's are in a >>> hardware RAID5. Debian 8. >>> I have a lot of small files in folders (100k+). When I enter the >>> folder I have a big delay while the system reading directory's >>> files list. >>> Using moosefs metrics I found that the speed of lookup >>> operations is equal to 1,8k operations per minute at maximum. >>> Why this could happen? I have another server with moosefs and >>> the speed of lookup could be 300k per minute and higher. 
>>> >>> >>> -- >>> Best regards, >>> Eugene >>> >>> >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Check out the vibrant tech community on one of the world's most >>> engaging tech sites, Slashdot.org <http://Slashdot.org>! http://sdm.link/slashdot >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > |
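Spelled out, the two tests requested above could look like this (mount point, directory, and master hostname are placeholders):

# 1. on the MooseFS master: mount the filesystem locally and time a
#    listing of a directory containing 100k+ files
mkdir -p /mnt/mfs
mfsmount /mnt/mfs -H localhost
cd /mnt/mfs/dir-with-100k-files
time ls -l > /dev/null

# 2. from the "slow" client: measure the round trip to the master
ping -c 20 mfsmaster.example

Comparing the local "time ls -l" with the same command run on the slow client separates master-side cost from network latency.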
From: Eugene D. <it...@da...> - 2017-05-18 08:43:35
|
CPU Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz RAM 32 GB with ECC 1 Gbit connection. load average: 0.00, 0.01, 0.05 Linux mfs 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux It is a totally new, empty server. If I copy a 1 GB file to mfs it takes around 22 seconds, so it can't be a network issue. The problem is only with the lookup operation. The mfs metadata files are placed on the SSD system drive. Read/write test: dd if=/dev/zero of=/test bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 19.4822 s, 264 MB/s dd if=/test of=/dev/null bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 1.56791 s, 3.3 GB/s mfschunkserver RAID read/write local test (not through mfs): dd if=/dev/zero of=/mnt/data/test bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 6.66198 s, 772 MB/s dd if=/mnt/data/test of=/dev/null bs=1024 count=5024000 5024000+0 records in 5024000+0 records out 5144576000 bytes (5.1 GB) copied, 1.52796 s, 3.4 GB/s -- Best regards, Eugene > On 18 May 2017, at 10:51, Aleksander Wieliczko <ale...@mo...> wrote: > > Hi, > metadata operations like ls mostly depend on MooseFS master CPU speed and network latency. > > Would you be so kind as to tell us something more about these parameters? > > > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <x-msg://2/moosefs.com> > > On 18.05.2017 09:05, Ben Harker wrote: >> I've experienced weirdness when using various RAID setups. It's stated in the documentation that a non-RAID setup is preferred - at least from what I remember. MFS is a FUSE filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files due to its distributed nature. >> >> I've mitigated this somewhat by introducing an SSD-based tier of storage that holds files smaller than a certain size. This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files. >> >> Hope that's helpful! >> >> On May 18, 2017 07:06, Eugene Diatlov <it...@da...> <mailto:it...@da...> wrote: >> Hi, >> >> I have a hardware server with the latest MooseFS. The HDDs are in hardware RAID5. Debian 8. >> I have a lot of small files in folders (100k+). When I enter a folder there is a big delay while the system reads the directory's file list. >> Using MooseFS metrics I found that the lookup rate tops out at about 1.8k operations per minute. >> Why could this happen? I have another server with MooseFS where the lookup rate can reach 300k per minute or higher. >> >> >> -- >> Best regards, >> Eugene >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org <http://slashdot.org/>! http://sdm.link/slashdot <http://sdm.link/slashdot> >> >> _________________________________________ >> moosefs-users mailing list >> moo...@li... <mailto:moo...@li...> >> https://lists.sourceforge.net/lists/listinfo/moosefs-users <https://lists.sourceforge.net/lists/listinfo/moosefs-users> > |
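One caveat on those numbers: dd measures sequential throughput, which says little about metadata performance, so the lookup problem will not show up in it. A test closer to the actual complaint would exercise many small operations against the MooseFS mount, for example (a sketch; /mnt/mfs is a placeholder mount point):

# create 10,000 empty files, then time a lookup-heavy listing
mkdir -p /mnt/mfs/lookup-test && cd /mnt/mfs/lookup-test
time sh -c 'for i in $(seq 1 10000); do : > f$i; done'
time ls -l > /dev/null
# repeat from a freshly mounted client for a cold-cache figure, since
# mfsmount's attribute/entry caches can hide master latency on reruns

Also, bs=1024 makes dd itself syscall-bound on fast storage, so the absolute MB/s figures above are best read as rough lower bounds.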
From: Aleksander W. <ale...@mo...> - 2017-05-18 07:51:25
|
Hi, metadata operations like ls mostly depend on MooseFS master CPU speed and network latency. Would you be so kind as to tell us something more about these parameters? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 18.05.2017 09:05, Ben Harker wrote: > I've experienced weirdness when using various RAID setups. It's stated > in the documentation that a non-RAID setup is preferred - at least > from what I remember. MFS is a FUSE filesystem that's bound to > have a performance penalty somewhere when it comes to huge numbers of > small files due to its distributed nature. > > I've mitigated this somewhat by introducing an SSD-based tier of > storage that holds files smaller than a certain size. This one's > definitely helped, and you can also set it as the C (create) tier for > when files get created if you get creative with your storage classes - > the initial writes to SSD are much quicker and can make your setup > feel much faster when writing files. > > Hope that's helpful! > > On May 18, 2017 07:06, Eugene Diatlov <it...@da...> wrote: > > Hi, > > I have a hardware server with the latest MooseFS. The HDDs are in > hardware RAID5. Debian 8. > I have a lot of small files in folders (100k+). When I enter a > folder there is a big delay while the system reads the directory's > file list. > Using MooseFS metrics I found that the lookup rate > tops out at about 1.8k operations per minute. > Why could this happen? I have another server with MooseFS where the > lookup rate can reach 300k per minute or higher. > > > -- > Best regards, > Eugene > > > > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > > > _________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
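In practice those two factors can be checked quickly: the master's metadata handling is largely single-threaded, so per-core clock speed matters more than core count, and every uncached lookup pays the client-to-master round trip. For example (the master hostname is a placeholder):

# on the master: CPU model and current clock
lscpu | grep -E 'Model name|MHz'
# on the client: round-trip time to the master
ping -c 20 mfsmaster.example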
From: Ben H. <bj...@ba...> - 2017-05-18 07:06:08
|
I've experienced weirdness when using various RAID setups. It's stated in the documentation that a non-RAID setup is preferred - at least from what I remember. MFS is a FUSE filesystem that's bound to have a performance penalty somewhere when it comes to huge numbers of small files, due to its distributed nature. I've mitigated this somewhat by introducing an SSD-based tier of storage that holds files smaller than a certain size. This one's definitely helped, and you can also set it as the C (create) tier for when files get created if you get creative with your storage classes - the initial writes to SSD are much quicker and can make your setup feel much faster when writing files. Hope that's helpful! On May 18, 2017 07:06, Eugene Diatlov <it...@da...> wrote: Hi, I have a hardware server with the latest MooseFS. The HDDs are in hardware RAID5. Debian 8. I have a lot of small files in folders (100k+). When I enter a folder there is a big delay while the system reads the directory's file list. Using MooseFS metrics I found that the lookup rate tops out at about 1.8k operations per minute. Why could this happen? I have another server with MooseFS where the lookup rate can reach 300k per minute or higher. -- Best regards, Eugene |
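For reference, the "C (create) tier" trick maps onto MooseFS 3.0 storage classes: chunkservers are given labels, and a class can use one label expression for the creation phase and another for the keep phase. A rough sketch, assuming chunkservers labelled S (SSD) and H (HDD); the exact expression syntax is in the mfsscadmin man page and may differ slightly between versions:

# mfschunkserver.cfg on SSD-backed servers:  LABELS = S
# mfschunkserver.cfg on HDD-backed servers:  LABELS = H

# class: create new chunks on two SSD-labelled servers,
# keep them long-term as two copies on HDD-labelled servers
mfsscadmin create -C 2S -K 2H fast_create

# assign the class to a directory (newly created files inherit it)
mfssetsclass fast_create /mnt/mfs/small-files

Which files end up in such a class is still up to the admin (per file or per directory); selecting files by size is not automatic.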
From: Eugene D. <it...@da...> - 2017-05-18 06:04:33
|
Hi, I have a hardware server with the latest MooseFS. The HDDs are in hardware RAID5. Debian 8. I have a lot of small files in folders (100k+). When I enter a folder there is a big delay while the system reads the directory's file list. Using MooseFS metrics I found that the lookup rate tops out at about 1.8k operations per minute. Why could this happen? I have another server with MooseFS where the lookup rate can reach 300k per minute or higher. -- Best regards, Eugene |