From: Bernd B. <b....@ho...> - 2016-12-14 09:31:21
Hello,

I am using MooseFS Pro to store virtual machines in a Proxmox cluster. It worked perfectly until client 3.0.84: when I upgraded, I had a massive increase in I/O delays on my servers. After a restart everything worked fine again. Then I read this thread and downgraded my clients to 3.0.81, and that solved the problem. As I do not see anything new on the mailing list about this item, I would like to know whether 3.0.86 now solves this problem.

Kind regards,
Bernd Burg

HOLDE AG
Zum Roppertsborn 14, 66646 Marpingen

On 03.11.2016 at 18:09, Piotr Robert Konopelko wrote:
> Great,
>
> thanks for the update!
>
> Best regards,
> Peter
From: Aleksander W. <ale...@mo...> - 2016-12-14 06:24:47
Hi Steve,

Thank you for all this information. Please update us in case of any problems with the systemd automount option.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/13/2016 02:58 PM, Wilson, Steven M wrote:
> [...]
From: Wilson, S. M <st...@pu...> - 2016-12-13 13:58:53
Hi Aleksander,

I started using autofs because I wanted to mount MooseFS file systems only as they are needed and then have them unmounted when they are no longer in use. That helps free up some CPU and memory resources when file systems are not in use (not much, though), but it also makes it easier to update the MooseFS client to a newer version. We've used autofs for some NFS mounts for many years (and amd before that...), so it was only natural to incorporate MooseFS into our autofs environment.

Thanks for the pointer to systemd automount. I didn't realize that it also supported an unmount option after a pre-determined idle timeout. I'll look into perhaps using that in the future.

Best regards,
Steve

From: Aleksander Wieliczko <ale...@mo...>
Sent: Tuesday, December 13, 2016 1:54 AM
To: Wilson, Steven M; MooseFS-Users
Subject: Re: [MooseFS-Users] Caution re: MooseFS mounting from automount in CentOS 7

> [...]
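For readers who want to try the systemd route mentioned above, a minimal /etc/fstab sketch could look like the one below. The master host name, mount point and idle timeout are placeholders; "mount -t moosefs" requires a client recent enough to provide /sbin/mount.moosefs (3.0.75+ according to the changelog later in this archive), and x-systemd.idle-timeout needs a reasonably recent systemd.

    # /etc/fstab - mount on first access, unmount after 10 minutes idle
    mfsmaster.example.lan:  /mnt/mfs  moosefs  noauto,x-systemd.automount,x-systemd.idle-timeout=10min  0  0

systemd then creates an automount unit for /mnt/mfs and mounts it lazily on first access, which is roughly what autofs is doing here.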
From: Aleksander W. <ale...@mo...> - 2016-12-13 06:54:39
Hi Steve,

Thank you for all this information. Basically, we have never used the MooseFS client with autofs.

Can you tell us what the real goal of using autofs with mfsmount is? Is an mfsmount entry in fstab not sufficient in your case?

You can find more information about fstab on the mfsmount man page:
https://moosefs.com/manpages/mfsmount.html

By the way, you can try the native systemd x-systemd.automount feature.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/12/2016 07:01 PM, Wilson, Steven M wrote:
> [...]
From: Wilson, S. M <st...@pu...> - 2016-12-12 18:01:40
Hi,

I ran into an interesting problem on a small subset of our systems that were running CentOS 7. I've begun using autofs for mounting MooseFS filesystems on an as-needed basis, and I found that our users were unable to mount and use any MooseFS filesystems from CentOS 7 workstations. ps showed the following:

    root 12550 1.9 1.0 520304 82260 ? S<sl 12:35 0:00 /sbin/mount.moosefs sb-data:/apps /net/apps -n -o rw,mfsbind=10.145.7.31

Note the "-n" option, which is an mfsmount option that will "omit default mount options (-o allow_other)". This was obviously the cause of my problem, but where was it coming from? My automount map file for this filesystem is the following, with no explicit "-n" option:

    apps \
        -fstype=moosefs,mfsbind=10.145.7.31 \
        sb-data:/apps

Turning on debug logging for automount, I get this message in the logs:

    Dec 12 11:21:59 gateway automount[7146]: spawn_umount: mtab link detected, passing -n to mount

So, if /etc/mtab is a link, automount will pass a "-n" option to the mount command. The general "-n" option for mount is to "mount without writing in /etc/mtab". This general option gets passed down to mount.moosefs, which interprets it quite differently!

An easy, but not so elegant, work-around is to change the map file entry to include an "allow_other" option like this:

    apps \
        -fstype=moosefs,allow_other,mfsbind=10.145.7.31 \
        sb-data:/apps

I don't understand why this only causes problems with CentOS 7 and not on my Ubuntu 16.04 installations, which also have /etc/mtab set up as a symbolic link to /proc/self/mounts. Ubuntu is using a different version of automount (5.1.1 compared to 5.0.7), but it looks like it's still checking for the mtab symbolic link:

    root@otter:~# strings /usr/sbin/automount | grep 'mtab link detected'
    %s: mtab link detected, passing -n to mount

Hopefully this will prove helpful to anyone else who might run into the same issue. And if someone has a better way to deal with it, let me know.

Regards,
Steve
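A quick way to check whether a given host has the mtab setup that triggers this behaviour is to look at /etc/mtab itself (prompt and date column below are examples):

    $ ls -l /etc/mtab
    lrwxrwxrwx 1 root root 17 ... /etc/mtab -> /proc/self/mounts

If it is a symlink, automount may add "-n" to the mount command line, and mfsmount will then drop its default allow_other option.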
From: Aleksander W. <ale...@mo...> - 2016-12-07 14:05:08
Hi,

Basically, the segfault points to a problem in libc-2.11.1.so. I see that you are using an old Ubuntu distribution as the MooseFS-Samba bridge. Please consider updating your MooseFS-Samba bridge OS and your MooseFS cluster: the MooseFS team released MooseFS 3.0.86 lately, and Ubuntu released the 16.04 OS version. Many improvements were made in both products.

By the way, please consider using XFS as the file system for MooseFS chunkservers. XFS handles delete operations better and is faster than EXT4 when a hard disk is about 90% full.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/07/2016 02:09 PM, Petrus Rossouw wrote:
> [...]
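A minimal sketch of preparing a chunkserver disk with XFS along those lines; the device name and mount point are placeholders, the mount options are a common choice rather than an official recommendation, and this assumes the chunkserver runs as the default mfs user:

    # format the data disk with XFS and mount it for the chunkserver
    mkfs.xfs /dev/sdb1
    mkdir -p /mnt/chunk1
    mount -o noatime /dev/sdb1 /mnt/chunk1
    chown mfs:mfs /mnt/chunk1
    # register the disk in the chunkserver's disk list
    echo /mnt/chunk1 >> /etc/mfs/mfshdd.cfg

mfshdd.cfg is the file from which mfschunkserver reads its list of data directories.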
From: Petrus R. <pe...@na...> - 2016-12-07 13:25:43
Error: mfsmount[3830]: segfault at a200f004 ip 00590a31 sp a3cf6f38 error 4 in libc-2.11.1.so[51e000+15c000]

Client OS: Ubuntu 10.04.4 LTS
MFS Version: 2.0.89
Mount point: /mnt/mfs

Samba share:

    [data]
        path = /mnt/mfs/fileserver/data
        read only = No
        create mask = 0777
        directory mask = 0777

Master OS: Debian GNU/Linux 7.11 (wheezy)
Chunk OS (7 chunk servers): Debian GNU/Linux 7.11 (wheezy)
Chunk server file systems: ext3, ext4

Details:

The client shares a folder from MooseFS using Samba. Heavy activity over Samba from Windows clients over a long period causes the segfault; in this instance, a Windows client copies several GB of data from the Samba share. The share was recently moved over from a local drive to MooseFS.
From: Aleksander W. <ale...@mo...> - 2016-12-06 13:14:14
Hi,

First of all, we have to be precise about whether we are talking about files or directories. All operations on folders (metadata operations) are synchronized at the master level.

Operations on files are more problematic, especially in distributed file systems. If you want to write data from many clients to one file (this is a really rare case), it is a good idea to use file locks, for example fcntl, lockf or flock (all of them are implemented in MooseFS, but be aware that flock is implemented since FUSE 2.9, so only the newest kernels support it). Without locks there is no way to be sure in which order writes are performed internally; the only guarantee is that, at any given time, only one client can write to one chunk of data (64 MB). Each chunk modification also invalidates caches in other clients.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 11/28/2016 11:15 AM, Winters.Hua wrote:
> [...]
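For reference, a minimal sketch of such locking from the shell, using flock(1) from util-linux; the paths are examples only, and the FUSE 2.9 caveat above applies to flock-style locks:

    # run on each client; appends are serialized by an exclusive lock
    (
        flock -x 9                                   # block until the exclusive lock is held
        echo "client A: $(date)" >> /mnt/mfs/shared/log.txt
    ) 9>/mnt/mfs/shared/log.txt.lock

The lock is taken on a separate lock file (descriptor 9) and is released automatically when the subshell exits.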
From: Aleksander W. <ale...@mo...> - 2016-12-06 06:44:05
Hi,

Basically there is no issue. You can use MooseFS master 3.0.86 with MooseFS chunkserver 3.0.82 and MooseFS client 3.0.86, but never use an older master with newer chunkservers and clients.

A chunkserver in the latest version will create chunks with the new 8k header. When you downgrade a chunkserver from 3.0.83 to an older version, the chunkserver will not recognize chunks with the 8k header and will mark them as broken.

On the other hand, why do you want to use such a mixed configuration? We always suggest using the same (latest) software version in the whole MooseFS cluster.

Best regards,
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/05/2016 05:43 PM, web user wrote:
> [...]
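As a sanity check before and after an upgrade, one way to confirm that every node runs the same version is to query the package database on each host; the hostnames below are placeholders, and the package names follow the moosefs.com repositories:

    for h in mfsmaster chunk1 chunk2 chunk3; do
        echo "== $h =="
        ssh "$h" "dpkg-query -W 'moosefs-*'"   # prints package name and version
    done

On RPM-based hosts, rpm -qa 'moosefs-*' gives the same information.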
From: web u. <web...@gm...> - 2016-12-05 16:43:45
I have a new user connecting to my MooseFS master from Mac OS X. While getting his environment set up, I saw the following error message:

    If you already upgraded your Chunkservers to v. 3.0.83 or higher, please DO NOT downgrade them!

    In MooseFS Chunkserver v. 3.0.83 we changed the Chunk header from 5k to 8k (see changelog <https://moosefs.com/documentation/changes-in-moosefs-3-0.html>) - it means that a Chunkserver older than 3.0.83 cannot "understand" the new Chunk header, which may lead to potential data loss!

Does that only apply to the chunk servers? Is there any issue with him connecting with an mfs client (> 3.0.83) to an mfs chunk server (< 3.0.83)?
From: Piotr R. K. <pio...@mo...> - 2016-12-02 18:54:53
Dear MooseFS Users,

we released the newest version of MooseFS 3.0 recently: MooseFS 3.0.86. This version mainly contains bugfixes and improves stability.

We strongly recommend upgrading to the latest version, but please remember: if you had a version lower than 3.0.83 and then upgraded your Chunkservers to v. 3.0.83 or higher, please DO NOT downgrade them! In MooseFS Chunkserver v. 3.0.83 we changed the Chunk header from 5k to 8k (see changelog) - this is one of the major changes, and it means that a Chunkserver older than 3.0.83 cannot "understand" the new Chunk header, which may lead to potential data loss!

If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Thanks!

Please find the changes in MooseFS 3 since 3.0.77 below:

MooseFS 3.0.86-1 (2016-11-30)
(master) fixed leader-follower resynchronization after reconnection (pro only)
(master) fixed rehashing condition in edge/node/chunk hashmaps (change inspired by yinjiawind)

MooseFS 3.0.85-1 (2016-11-17)
(mount) fixed memory leak (also efficiency) on Linux and potential segfaults on FreeBSD (negative condition)
(mount) fixed race condition for inode value in write module
(mount) better descriptors handling (lists of free elements)
(mount) better releasing descriptors on FreeBSD
(cs) fixed time condition (patch sent by yinjiawind)

MooseFS 3.0.84-1 (2016-10-06)
(master) fixed setting acl-default without named users or named groups
(master) fixed master-follower synchronization after setting storage class

MooseFS 3.0.83-1 (2016-09-30)
(cs) changed header size from 5k to 8k (due to 4k-sector hard disks)

MooseFS 3.0.82-1 (2016-09-28)
(all) silenced message about using deprecated parameter 'oom_adj'
(mount) fixed FreeBSD delayed release mechanism
(mount) added rwlock for chunks

MooseFS 3.0.81-1 (2016-07-25)
(mount) fixed oom killer disabling (setting oom_adj and oom_score_adj)
(cli) fixed displaying inactive mounts
(mount) added fsync before removing any locks
(daemons) added disabling oom killer (Linux only)
(all) small fixes in manpages
(mount) fixed handling nonblocking lock commands (unlock and try) in both locking mechanisms
(daemons+mount) changed default settings for limiting malloc arenas (Linux only)

MooseFS 3.0.80-1 (2016-07-13)
(master) fixed chunk loop (in some cases chunks from the last hash position might be left unchecked)
(master) fixed storage class management (fixed has_***_labels fields)

MooseFS 3.0.79-1 (2016-07-05)
(master) fixed 'flock' (change of lock type SH->EX and EX->SH caused access to freed memory and usually SEGV)

MooseFS 3.0.78-1 (2016-06-14)
(cs) fixed serious error that may cause data corruption during internal rebalance (intr. in 3.0.75)

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail: pio...@mo...
www: https://moosefs.com

> On 08 Jun 2016, at 1:04 AM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Dear MooseFS Users,
>
> we released the newest versions of both MooseFS 3.0 and MooseFS 2.0 recently: 3.0.77 and 2.0.89.
>
> They improve MooseFS stability and (in MooseFS 3.0) also add new features, like:
> - Storage Classes <https://moosefs.com/documentation/moosefs-3-0.html>
> - possibility to mount MooseFS on Linux by issuing: mount -t moosefs mfsmaster.example.lan: /mnt/mfs
> - file sparsification
> - automatic temporary maintenance mode
> - new I/O synchronisation between different mounts
>
> If you like MooseFS, please support it by starring it on GitHub <https://github.com/moosefs/moosefs>. Thanks!
>
> Please find the changes in MooseFS 3 since 3.0.73 below:
>
> MooseFS 3.0.77-1 (2016-06-07)
> (mount) added assertions after packet allocation in master communication module
> (manpages) fixes related to storage classes
> (all) improved error messages used for storage classes
>
> MooseFS 3.0.76-1 (2016-06-03)
> (master) fixed resolving multi path for root inode
> (master) fixed licence expiration behaviour (pro only)
>
> MooseFS 3.0.75-1 (2016-04-20)
> (all) fixed cppcheck warnings/errors (mostly false negatives)
> (cs) added file sparsification during chunk write (also chunk replication, chunk duplication etc.)
> (mount) fixed write data inefficiency when I/O was performed by root
> (master) fixed removing too early locked chunks from data structures
> (tools) fixed reusing local ports (connection to master returned EADDRNOTAVAIL)
> (all) changed default action from restart to start
> (all) added exiting when user defined config (option -c) cannot be loaded
> (cs) changed handling chunk filename (dynamic filename generation - cs should use less memory)
> (netdump) changed 'moose ports only' option to 'port range'
> (mount) limited number of files kept as open after close
> (cs) changed subfolder choosing algorithm
> (mount) changed mutex lock to rw-lock for I/O on the same descriptor
> (mount) added link to mfsmount from '/sbin/mount.moosefs' (Linux only)
> (all) introduced "storage classes" (new goal/labels management)
> (master+cs) introduced 'temporary maintenance mode' (automatic maintenance mode after graceful stop of cs)
> (master+mount) added fix for bug in FreeBSD kernel (kernel sends truncate before first close - FreeBSD only)
> (cs) fixed setting malloc pools on Linux
>
> MooseFS 3.0.74-1 (2016-03-08)
> (master) fixed rebalance replication (check for all chunk copies for destination - not only valid)
> (master+mount) new mechanism for atime+mtime setting during I/O
> (master+mount) new I/O synchronization between different mounts (with cache invalidation)
> (master+mount) new chunk number/version cache (with automatic invalidation from master)
> (master) added mapping chunkserver IP classes (allows a separate network for I/O and a separate one for other activity)
> (master) fixed status returned by writechunk after network down/up
> (master) changed trashtime from seconds to hours
> (master) added METADATA_SAVE_FREQ option (allows saving metadata less frequently than every hour)
> (master) added using all available servers for replication in emergency (endangered chunks), even overloaded and under maintenance
> (master) added using chunkservers in 'internal rebalance' state in case of deleting chunks
> (all) spell check errors fixed (patch contributed by Dmitry Smirnov)
>
> Please find the changes in MooseFS 2.0 since 2.0.88 below:
>
> MooseFS 2.0.89-1 (2016-04-27)
> (master+mount) added fix for bug in FreeBSD kernel (kernel sends truncate before first close - FreeBSD only)
>
> Best regards,
> Peter
>
>> On 08 Mar 2016, at 9:24 AM, Piotr Robert Konopelko <pio...@mo...> wrote:
>>
>> Dear MooseFS Users,
>>
>> we released the newest versions of both MooseFS 3 and MooseFS 2 recently: 3.0.73 and 2.0.88.
>>
>> Please find the changes in MooseFS 3 since 3.0.71 below:
>>
>> MooseFS 3.0.73-1 (2016-02-11)
>> (master) fixed restoring ARCHCHG from changelog
>> (cli+cgi) fixed displaying master list with followers only (pro version only)
>> (master) added using size and length quota to fix disk usage values (statfs)
>> (master) fixed xattr bug which may lead to data corruption and segfaults (intr. in 2.1.1)
>> (master) added 'node_info' packet
>> (tools) added '-p' option to 'mfsdirinfo' - 'precise mode'
>> (master) fixed edge renumeration
>> (master) added detecting of wrong edge numbers and forcing renumeration in such a case
>>
>> MooseFS 3.0.72-1 (2016-02-04)
>> (master+cgi+cli) added global 'umask' option to exports.cfg
>> (all) changed address of FSF in GPL licence text
>> (debian) removed obsolete conffiles
>> (debian) fixed copyright file
>> (mount) fixed parsing mfsmount.cfg (system options like nodev, noexec etc. were omitted)
>> (master) changed the way the cs internal rebalance state is treated by the master (as 'grace' state instead of 'overloaded')
>> (mount) fixed bug in read module (setting etab after ranges realloc)
>> (tools) removed obsoleted command 'mfssnapshot'
>>
>> Please find the changes in MooseFS 2 since 2.0.83 below:
>>
>> MooseFS 2.0.88-1 (2016-03-02)
>> (master) added METADATA_SAVE_FREQ option (allows saving metadata less frequently than every hour)
>>
>> MooseFS 2.0.87-1 (2016-02-23)
>> (master) fixed status returned by writechunk after network down/up
>>
>> MooseFS 2.0.86-1 (2016-02-22)
>> (master) fixed initialization of ATIME_MODE
>>
>> MooseFS 2.0.85-1 (2016-02-11)
>> (master) added ATIME_MODE option to set atime modification behaviour
>> (master) added using size and length quota to fix disk usage values (statfs)
>> (all) changed address of FSF in GPL licence text
>> (debian) removed obsolete conffiles
>> (debian) fixed copyright file
>> (mount) fixed parsing mfsmount.cfg (system options like nodev, noexec etc. were omitted)
>> (tools) removed obsoleted command 'mfssnapshot'
>>
>> MooseFS 2.0.84-1 (2016-01-19)
>> (mount) fixed setting file length in write module during truncate (fixes "git svn" case)
>>
>> Best regards,
>> Peter
>>
>>> On 27 Jan 2016, at 5:51 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>>>
>>> Dear MooseFS Users,
>>>
>>> today we published the newest version from the 3.x branch: 3.0.71.
>>>
>>> Please find the changes since 3.0.69 below:
>>>
>>> MooseFS 3.0.71-1 (2016-01-21)
>>> (master) fixed emptying trash issue (intr. in 3.0.64)
>>> (master) fixed possible segfault in chunkservers database (intr. in 3.0.67)
>>> (master) changed trash part choice from nondeterministic to deterministic
>>>
>>> MooseFS 3.0.70-1 (2016-01-19)
>>> (cgi+cli) fixed displaying info when there are no active masters (intr. in 3.0.67)
>>> (mount+common) refactoring code to be Windows ready
>>> (mount) added option 'mfsflattrash' (makes trash look like before version 3.0.64)
>>> (mount) added fixes for NetBSD (patch contributed by Tom Ivar Helbekkmo)
>>>
>>> Best regards,
>>>
>>> --
>>> Piotr Robert Konopelko
>>> MooseFS Technical Support Engineer | moosefs.com
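Since an older master must never serve newer chunkservers and clients (see the advice elsewhere in this archive), a cautious upgrade on Debian/Ubuntu might proceed top-down: master first, then chunkservers one at a time, then clients. A sketch, with package names as published in the moosefs.com repositories:

    # on the master host
    apt-get update && apt-get install moosefs-master

    # on each chunkserver in turn, letting it reconnect before the next one
    apt-get update && apt-get install moosefs-chunkserver

    # on the client machines last
    apt-get update && apt-get install moosefs-client

Remember the one-way door above: once chunkservers run 3.0.83 or higher, do not downgrade them.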
From: Winters.H. <hua...@qq...> - 2016-11-28 10:16:04
Dear MFS experts,

I'm using MFS 3.0.81 (CentOS 6u3) for some tests, and I have one question about concurrent reads and writes. From the user manual, I remember that all read operations are concurrent, and writes are too, but writes to the same file fragment are sequential, controlled by the MFS server.

So, if I have several clients, like Client A and Client B, it is possible to upload the same folder (folder M) to the MFS cache from both Client A and Client B at the same time. I think this write operation will happen on the same file fragment, and so it should be sequential and synchronized by the MFS server, right? Also, at the same time, there are Client C and Client D, which possibly want to read the same folder M at the same time.

For this case, how does MFS make sure both reads and writes work as expected? Or should I do some control on the client side?

Appreciate your help, and thanks.

Regards,
Xinghua Gao
From: Alex C. <ac...@in...> - 2016-11-15 21:59:39
On 15/11/16 17:08, Gandalf Corvotempesta wrote:
> Hi,
> let's assume a master crash in a cluster with hundreds of millions of files (many, many terabytes).
> How long does it take for the metalogger to recover and be promoted as the new master?
> Is I/O blocked during the whole recovery procedure?

Metaloggers don't get promoted, only Follower Masters (Pro only). If you have a metalogger, you can convert it manually to a master or write some complicated Linux-HA stuff to accomplish this.

LizardFS claims to do this for "free beer", but the HA solution is still based on services external to the master server. I will tell you that LizardFS looked like a great proposition for us until we realised this. There have also been a few complaints on the LizardFS GitHub about drops in performance over the last few months that users have said have not happened in MFS.

In my experience, a Master follower takes about 20 seconds max to take over the MD master role in the Pro version. During that time, I/O will only be blocked to files created since the Master died. If you have that many files, and need the uptime, again I'd suggest stumping up the cash. It's way cheaper than anything else we found that could do the same job.

The only alternatives I can offer you are RozoFS (didn't test much, but it has very specific topology/number-of-host requirements and only erasure coding, so you can't split between two nearby datacenters and guarantee HA if one DC dies) and BeeGFS (fast, but again no auto-HA without doing it yourself - this is planned very soon though). NB: BeeGFS asks for a commercial support license for HA features, ACLs and xattrs, which just sucks a bit.

Believe me, I've gone through exactly what you have. GlusterFS isn't worth a penny if you want stability, and the big storage guys will charge you at least high-5 or even 6 figures for even a few TB. EMC Isilon can't even do active/active between sites, only locally via InfiniBand; this wrote it off for us. The only other commercial system I'd ask you to look at would be ExaBlox. We didn't price that, but I hear it's "competitive".

Good luck,
Alex
From: Gandalf C. <gan...@gm...> - 2016-11-15 17:08:58
Hi,

let's assume a master crash in a cluster with hundreds of millions of files (many, many terabytes).

How long does it take for the metalogger to recover and be promoted as the new master? Is I/O blocked during the whole recovery procedure?
From: Gandalf C. <gan...@gm...> - 2016-11-15 15:38:22
Hi,

how do you manage backups in a scale-out storage system like MooseFS? If you store many PB of data, backing it all up would be a pain, and restoring it is even worse.
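Not a full answer, but worth noting: MooseFS ships an mfsmakesnapshot tool that makes a lazy (copy-on-write) duplicate of a directory tree within the same MooseFS instance - cheap point-in-time copies, though not an off-cluster backup. The paths here are examples:

    # lazy snapshot: chunks are shared until either side is modified
    mfsmakesnapshot /mnt/mfs/data /mnt/mfs/snapshots/data-2016-11-15

Chunks are only duplicated when one of the copies is later changed, so the snapshot itself is quick and initially takes almost no extra space.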
From: Krzysztof K. <krz...@mo...> - 2016-11-15 06:26:02
MooseFS is exhibiting at the Supercomputing Conference 2016 in Salt Lake City. If you happen to be attending the conference, please feel free to stop by and say hello - we are in booth #3469.

--
MooseFS Team
From: Piotr R. K. <pio...@mo...> - 2016-11-03 17:09:27
Great,

thanks for the update!

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com

> On 03 Nov 2016, at 5:30 PM, Ricardo J. Barberis <ric...@do...> wrote:
> [...]
From: Ricardo J. B. <ric...@do...> - 2016-11-03 16:59:16
Great! And I confirm that mfsmount 3.0.81 works fine with the rest of the cluster on 3.0.84.

On Wednesday 02/11/2016, Piotr Robert Konopelko wrote:
> Hi,
>
> just a quick update for now - if you are running MooseFS Mount v. 3.0.84 and encountering some efficiency problems, please downgrade your clients to 3.0.81.
>
> There probably is a bug in the newest client and we are working on it.
>
> By the way: Please DO NOT downgrade Chunkservers if you have already upgraded them to 3.0.84, because there's a new 8k header in CS 3.0.84 and a downgrade may cause potential data loss - an older Chunkserver can't "understand" the new header.
>
> Best regards,
> Peter

--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
From: Wilson, S. M <st...@pu...> - 2016-11-03 01:33:45
Okay, thanks... I should have remembered that.

Steve

On Nov 2, 2016 9:04 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
> [...]

On Nov 3, 2016 1:49 AM, "Wilson, Steven M" <st...@pu...> wrote:
> Hi,
>
> Is there a repository of older versions of MooseFS packages, like 3.0.81? In my case, I'm looking for Ubuntu 14.04/16.04 packages.
>
> Thanks!
>
> Steve
>
> On Nov 2, 2016 5:52 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>> [...]

>> On 02 Nov 2016, at 10:44 PM, Ricardo J. Barberis <ric...@do...> wrote:
>>
>> Damn, I just sent an email describing this exact problem, but my whole cluster runs on MooseFS 3.0.84.
>>
>> On Wednesday 02/11/2016, Wilson, Steven M wrote:
>>> Hi,
>>>
>>> We have several workstations that are using the latest version of mfsmount (3.0.84), and I've started to receive complaints about very slow performance. I ran a few tests (untarring the Linux kernel source), and it appears that on the 3.0.84 clients performance continues to degrade each time I run the test. For example, one workstation shows these results from three successive runs:
>>>
>>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>>>
>>> real 4m34.614s
>>> user 0m1.416s
>>> sys 0m7.480s
>>>
>>> real 2m57.863s
>>> user 0m0.436s
>>> sys 0m2.192s
>>>
>>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>>>
>>> real 6m59.159s
>>> user 0m1.924s
>>> sys 0m7.276s
>>>
>>> real 5m39.582s
>>> user 0m0.484s
>>> sys 0m2.548s
>>>
>>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>>>
>>> real 9m1.816s
>>> user 0m1.928s
>>> sys 0m7.160s
>>>
>>> real 7m54.979s
>>> user 0m0.584s
>>> sys 0m2.968s
>>>
>>> If I unmount the file system and then mount it again, performance returns to normal, but it will degrade over time like before.
>>>
>>> On the other hand, if I run the same test on a different client using mfsmount 3.0.81, performance remains stable and doesn't degrade over time after heavy use.
>>>
>>> Is there perhaps a problem with mfsmount versions higher than 3.0.81?
>>>
>>> I should add that none of my servers (master, metalogger, chunk) are running higher than 3.0.81, so this could be due to a mismatch between mfsmount and server version. I doubt this, but I wanted to mention it.
>>>
>>> Just to mention in passing, the same test on a local disk is blazingly fast in comparison. I understand that this is a really tortuous test for a distributed file system, but the performance discrepancy is quite substantial (5 seconds vs. 275 seconds for the untar, 1 second vs. 178 seconds for the rm). Here are the timings on the local disk:
>>>
>>> stevew@otter:/otter-scratch/TEST$ time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>>>
>>> real 0m5.419s
>>> user 0m0.304s
>>> sys 0m2.368s
>>>
>>> real 0m1.038s
>>> user 0m0.052s
>>> sys 0m0.976s
>>>
>>> Thanks,
>>>
>>> Steve
From: Piotr R. K. <pio...@mo...> - 2016-11-03 01:07:35
|
<p dir="ltr">BTW: You don't have to replace normal MooseFS repository, just add mentioned 3.0.81 entry to /etc/apt/sources.list.d/moosefs.list (next to "normal" repo entry) and issue:</p> <p dir="ltr"># apt update<br> # apt install moosefs-client=3.0.81-1<br><br><br></p> <p dir="ltr">Best regards,<br> Peter</p> <p dir="ltr">Piotr Robert Konopelko <br> MooseFS Technical Support Engineer <br> e-mail: pio...@mo... <br> www: https://moosefs.com</p> <p dir="ltr">// Sent from my phone, sorry for condensed form</p> <div class="gmail_extra"><br><div class="gmail_quote">On Nov 3, 2016 2:03 AM, Piotr Robert Konopelko <pio...@mo...> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">Hi,</p> <p dir="ltr">please follow instructions available on https://moosefs.com/download.html:</p> <p dir="ltr">"There is also possibility to use version number instead of "branch" if you want to decide exactly which version of MooseFS you want to upgrade to (e.g. 3.0.81-1)",</p> <p dir="ltr">so the address would be:</p> <p dir="ltr">deb http://ppa.moosefs.com/3.0.81/apt/ubuntu/trusty trusty main<br /><br /></p> <p dir="ltr">Best regards,<br /> Peter</p> <p dir="ltr">Piotr Robert Konopelko <br /> MooseFS Technical Support Engineer <br /> e-mail: <a href="mailto:piotr.konopelko@moosefs.com">piotr.konopelko@moosefs.com</a> <br /> www: <a href="https://moosefs.com/">https://moosefs.com</a></p> <p dir="ltr">// Sent from my phone, sorry for condensed form</p> <div><br /><div class="elided-text">On Nov 3, 2016 1:49 AM, "Wilson, Steven M" <stevew@purdue.edu> wrote:<br /><blockquote style="margin:0 0 0 0.8ex;border-left:1px #ccc solid;padding-left:1ex"> <div> <p dir="ltr">Hi,</p> <p dir="ltr">Is there a repository of older versions MooseFS packages like 3.0.81? In my case, I'm looking for Ubuntu 14.04/16.04 packages.</p> <p dir="ltr">Thanks!</p> <p dir="ltr">Steve<br /> </p> <p dir="ltr">On Nov 2, 2016 5:52 PM, Piotr Robert Konopelko <piotr.konopelko@moosefs.com> wrote:<br /> </p> <blockquote> <p dir="ltr">><br /> </p> </blockquote> <p dir="ltr">> Hi,<br /> ><br /> > just a quick update for now - if you are running MooseFS Mount v. 3.0.84 and encountering some efficiency problems, please downgrade your clients to 3.0.81.<br /> ><br /> > There probably is a bug in the newest client and we are working on it.<br /> ><br /> > By the way:<b> Please DO NOT downgrade Chunkservers if you have already upgraded them to 3.0.84, because there's new, 8k header in CS 3.0.84 and downgrade may cause potential data loss - older Chunkserver can't "understand" new header.</b><br /> ><br /> ><br /> > Best regards,<br /> > Peter<br /> ><br /> > -- <br /> ><br /> > Piotr Robert Konopelko <br /> > MooseFS Technical Support Engineer <br /> > e-mail: <a href="mailto:piotr.konopelko@moosefs.com">piotr.konopelko@moosefs.com</a> <br /> > www: <a href="https://moosefs.com">https://moosefs.com</a><br /> <a href="https://twitter.com/MooseFS">></a><br /> <a href="https://twitter.com/MooseFS">> </a><a href="https://www.facebook.com/moosefs"> </a><a href="https://www.linkedin.com/company/moosefs"> </a><br /> ><br /> > This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. 
Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.<br /> ><br /> ><br /> </p> <blockquote> <p dir="ltr">>> On 02 Nov 2016, at 10:44 PM, Ricardo J. Barberis <<a href="mailto:ricardo.barberis@donweb.com">ricardo.barberis@donweb.com</a>> wrote:<br /> </p> </blockquote> <p dir="ltr">>><br /> >> Damn, I just sent an email describing this exact problem, but my whole cluster <br /> >> runs on Moosefs 3.0.84.<br /> >><br /> >> El Miércoles 02/11/2016, Wilson, Steven M escribió:<br /> </p> <blockquote> <p dir="ltr">>>><br /> </p> </blockquote> <p dir="ltr">>>> Hi,<br /> >>><br /> >>><br /> >>> We have several workstations that are using the latest version of mfsmount<br /> >>> (3.0.84) and I've started to receive complaints about very slow<br /> >>> performance. I ran a few tests (untarring the Linux kernel source) and it<br /> >>> appears that on the 3.0.84 clients performance will continue to degrade<br /> >>> each time I run the test. For example, one workstation shows these results<br /> >>> from three successive runs:<br /> >>><br /> >>> ?<br /> >>><br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 4m34.614s<br /> >>> user 0m1.416s<br /> >>> sys 0m7.480s<br /> >>><br /> >>> real 2m57.863s<br /> >>> user 0m0.436s<br /> >>> sys 0m2.192s<br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 6m59.159s<br /> >>> user 0m1.924s<br /> >>> sys 0m7.276s<br /> >>><br /> >>> real 5m39.582s<br /> >>> user 0m0.484s<br /> >>> sys 0m2.548s<br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 9m1.816s<br /> >>> user 0m1.928s<br /> >>> sys 0m7.160s<br /> >>><br /> >>> real 7m54.979s<br /> >>> user 0m0.584s<br /> >>> sys 0m2.968s<br /> >>> ?<br /> >>> ?If I unmount the file system and then mount it again, performance returns<br /> >>> to normal but will degrade over time like before.<br /> >>><br /> >>> On the other hand, if I run the same test on a different client using<br /> >>> mfsmount 3.0.81 then my performance remains stable and doesn't degrade over<br /> >>> time after heavy use.<br /> >>><br /> >>> Is there perhaps a problem with mfsmount versions higher than 3.0.81?<br /> >>><br /> >>> I should add that none of my servers (master, metalogger, chunk) are<br /> >>> running higher than 3.0.81 so this could be due to a mismatch between<br /> >>> mfsmount and server version. I doubt this but I wanted to mention it.<br /> >>><br /> >>><br /> >>> Just to mention in passing, the same test on a local disk is blazingly fast<br /> >>> in comparison. I understand that this is a really tortuous test for a<br /> >>> distributed file system but the performance discrepancy is quite<br /> >>> substantial (5 seconds vs. 275 seconds for the untar, 1 second vs. 178<br /> >>> seconds for the rm). Here are the timings on the local disk:?<br /> >>><br /> >>> stevew@otter:/otter-scratch/TEST$ time tar xf linux-4.9-rc3.tar; time rm<br /> >>> -rf linux-4.9-rc3<br /> >>><br /> >>> real 0m5.419s<br /> >>> user 0m0.304s<br /> >>> sys 0m2.368s<br /> >>><br /> >>> real 0m1.038s<br /> >>> user 0m0.052s<br /> >>> sys 0m0.976s<br /> >>> ?<br /> >>><br /> >>> Thanks,<br /> >>><br /> >>><br /> >>> Steve<br /> >><br /> >><br /> >> -- <br /> >> Ricardo J. 
Barberis<br /> >> Senior SysAdmin / IT Architect<br /> >> DonWeb<br /> >> La Actitud Es Todo<br /> <a href="http://www.donweb.com/">>> www.DonWeb.com</a><br /> >><br /> >> ------------------------------------------------------------------------------<br /> >> Developer Access Program for Intel Xeon Phi Processors<br /> >> Access to Intel Xeon Phi processor-based developer platforms.<br /> >> With one year of Intel Parallel Studio XE.<br /> >> Training and support from Colfax.<br /> >> Order your platform today.<a href="http://sdm.link/xeonphi"> http://sdm.link/xeonphi</a><br /> >> _________________________________________<br /> >> moosefs-users mailing list<br /> <a href="mailto:moosefs-users@lists.sourceforge.net">>> moosefs-users@lists.sourceforge.net</a><br /> <a href="https://lists.sourceforge.net/lists/listinfo/moosefs-users">>> https://lists.sourceforge.net/lists/listinfo/moosefs-users</a><br /> ><br /> ></p> </div> </blockquote></div><br /></div></blockquote></div><br></div> |
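If you want the downgraded client to survive later "apt upgrade" runs, the version can also be held. A minimal sketch, assuming Ubuntu trusty; the version-specific repo line is taken from the message above, while the "moosefs-3" branch URL is an assumption based on the download page's naming convention:

    # /etc/apt/sources.list.d/moosefs.list - keep the normal branch entry and
    # add the version-specific one next to it:
    deb http://ppa.moosefs.com/moosefs-3/apt/ubuntu/trusty trusty main
    deb http://ppa.moosefs.com/3.0.81/apt/ubuntu/trusty trusty main

    # Install the exact version, then hold it so "apt upgrade" won't move it:
    apt update
    apt install moosefs-client=3.0.81-1
    apt-mark hold moosefs-client    # release later with: apt-mark unhold moosefs-client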
From: Piotr R. K. <pio...@mo...> - 2016-11-03 01:04:13
|
<p dir="ltr">Hi,</p> <p dir="ltr">please follow instructions available on https://moosefs.com/download.html:</p> <p dir="ltr">"There is also possibility to use version number instead of "branch" if you want to decide exactly which version of MooseFS you want to upgrade to (e.g. 3.0.81-1)",</p> <p dir="ltr">so the address would be:</p> <p dir="ltr">deb http://ppa.moosefs.com/3.0.81/apt/ubuntu/trusty trusty main<br><br></p> <p dir="ltr">Best regards,<br> Peter</p> <p dir="ltr">Piotr Robert Konopelko <br> MooseFS Technical Support Engineer <br> e-mail: <a href="mailto:pio...@mo...">pio...@mo...</a> <br> www: <a href="https://moosefs.com/">https://moosefs.com</a></p> <p dir="ltr">// Sent from my phone, sorry for condensed form</p> <div class="gmail_extra"><br><div class="gmail_quote">On Nov 3, 2016 1:49 AM, "Wilson, Steven M" <st...@pu...> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> <div> <p dir="ltr">Hi,</p> <p dir="ltr">Is there a repository of older versions MooseFS packages like 3.0.81? In my case, I'm looking for Ubuntu 14.04/16.04 packages.</p> <p dir="ltr">Thanks!</p> <p dir="ltr">Steve<br /> </p> <p dir="ltr">On Nov 2, 2016 5:52 PM, Piotr Robert Konopelko <piotr.konopelko@moosefs.com> wrote:<br /> </p> <blockquote> <p dir="ltr">><br /> </p> </blockquote> <p dir="ltr">> Hi,<br /> ><br /> > just a quick update for now - if you are running MooseFS Mount v. 3.0.84 and encountering some efficiency problems, please downgrade your clients to 3.0.81.<br /> ><br /> > There probably is a bug in the newest client and we are working on it.<br /> ><br /> > By the way:<b> Please DO NOT downgrade Chunkservers if you have already upgraded them to 3.0.84, because there's new, 8k header in CS 3.0.84 and downgrade may cause potential data loss - older Chunkserver can't "understand" new header.</b><br /> ><br /> ><br /> > Best regards,<br /> > Peter<br /> ><br /> > -- <br /> ><br /> > Piotr Robert Konopelko <br /> > MooseFS Technical Support Engineer <br /> > e-mail: <a href="mailto:piotr.konopelko@moosefs.com">piotr.konopelko@moosefs.com</a> <br /> > www: <a href="https://moosefs.com">https://moosefs.com</a><br /> <a href="https://twitter.com/MooseFS">></a><br /> <a href="https://twitter.com/MooseFS">> </a><a href="https://www.facebook.com/moosefs"> </a><a href="https://www.linkedin.com/company/moosefs"> </a><br /> ><br /> > This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.<br /> ><br /> ><br /> </p> <blockquote> <p dir="ltr">>> On 02 Nov 2016, at 10:44 PM, Ricardo J. 
Barberis <<a href="mailto:ricardo.barberis@donweb.com">ricardo.barberis@donweb.com</a>> wrote:<br /> </p> </blockquote> <p dir="ltr">>><br /> >> Damn, I just sent an email describing this exact problem, but my whole cluster <br /> >> runs on Moosefs 3.0.84.<br /> >><br /> >> El Miércoles 02/11/2016, Wilson, Steven M escribió:<br /> </p> <blockquote> <p dir="ltr">>>><br /> </p> </blockquote> <p dir="ltr">>>> Hi,<br /> >>><br /> >>><br /> >>> We have several workstations that are using the latest version of mfsmount<br /> >>> (3.0.84) and I've started to receive complaints about very slow<br /> >>> performance. I ran a few tests (untarring the Linux kernel source) and it<br /> >>> appears that on the 3.0.84 clients performance will continue to degrade<br /> >>> each time I run the test. For example, one workstation shows these results<br /> >>> from three successive runs:<br /> >>><br /> >>> ?<br /> >>><br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 4m34.614s<br /> >>> user 0m1.416s<br /> >>> sys 0m7.480s<br /> >>><br /> >>> real 2m57.863s<br /> >>> user 0m0.436s<br /> >>> sys 0m2.192s<br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 6m59.159s<br /> >>> user 0m1.924s<br /> >>> sys 0m7.276s<br /> >>><br /> >>> real 5m39.582s<br /> >>> user 0m0.484s<br /> >>> sys 0m2.548s<br /> >>> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf<br /> >>> linux-4.9-rc3<br /> >>><br /> >>> real 9m1.816s<br /> >>> user 0m1.928s<br /> >>> sys 0m7.160s<br /> >>><br /> >>> real 7m54.979s<br /> >>> user 0m0.584s<br /> >>> sys 0m2.968s<br /> >>> ?<br /> >>> ?If I unmount the file system and then mount it again, performance returns<br /> >>> to normal but will degrade over time like before.<br /> >>><br /> >>> On the other hand, if I run the same test on a different client using<br /> >>> mfsmount 3.0.81 then my performance remains stable and doesn't degrade over<br /> >>> time after heavy use.<br /> >>><br /> >>> Is there perhaps a problem with mfsmount versions higher than 3.0.81?<br /> >>><br /> >>> I should add that none of my servers (master, metalogger, chunk) are<br /> >>> running higher than 3.0.81 so this could be due to a mismatch between<br /> >>> mfsmount and server version. I doubt this but I wanted to mention it.<br /> >>><br /> >>><br /> >>> Just to mention in passing, the same test on a local disk is blazingly fast<br /> >>> in comparison. I understand that this is a really tortuous test for a<br /> >>> distributed file system but the performance discrepancy is quite<br /> >>> substantial (5 seconds vs. 275 seconds for the untar, 1 second vs. 178<br /> >>> seconds for the rm). Here are the timings on the local disk:?<br /> >>><br /> >>> stevew@otter:/otter-scratch/TEST$ time tar xf linux-4.9-rc3.tar; time rm<br /> >>> -rf linux-4.9-rc3<br /> >>><br /> >>> real 0m5.419s<br /> >>> user 0m0.304s<br /> >>> sys 0m2.368s<br /> >>><br /> >>> real 0m1.038s<br /> >>> user 0m0.052s<br /> >>> sys 0m0.976s<br /> >>> ?<br /> >>><br /> >>> Thanks,<br /> >>><br /> >>><br /> >>> Steve<br /> >><br /> >><br /> >> -- <br /> >> Ricardo J. 
Barberis<br /> >> Senior SysAdmin / IT Architect<br /> >> DonWeb<br /> >> La Actitud Es Todo<br /> <a href="http://www.donweb.com/">>> www.DonWeb.com</a><br /> >><br /> >> ------------------------------------------------------------------------------<br /> >> Developer Access Program for Intel Xeon Phi Processors<br /> >> Access to Intel Xeon Phi processor-based developer platforms.<br /> >> With one year of Intel Parallel Studio XE.<br /> >> Training and support from Colfax.<br /> >> Order your platform today.<a href="http://sdm.link/xeonphi"> http://sdm.link/xeonphi</a><br /> >> _________________________________________<br /> >> moosefs-users mailing list<br /> <a href="mailto:moosefs-users@lists.sourceforge.net">>> moosefs-users@lists.sourceforge.net</a><br /> <a href="https://lists.sourceforge.net/lists/listinfo/moosefs-users">>> https://lists.sourceforge.net/lists/listinfo/moosefs-users</a><br /> ><br /> ></p> </div> </blockquote></div><br></div> |
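After adding the version-specific repository entry, it is worth confirming that apt actually sees it and which version it would pick. A hedged sketch using standard apt tooling (the package name moosefs-client comes from the thread):

    apt update
    apt-cache policy moosefs-client           # shows the candidate version and every version each repo offers
    apt install moosefs-client=3.0.81-1       # request 3.0.81 explicitly if the candidate is newer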
From: Wilson, S. M <st...@pu...> - 2016-11-03 00:50:11
|
Hi,

Is there a repository of older versions of MooseFS packages like 3.0.81? In my case, I'm looking for Ubuntu 14.04/16.04 packages.

Thanks!

Steve

On Nov 2, 2016 5:52 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Hi,
>
> just a quick update for now - if you are running MooseFS Mount v. 3.0.84 and encountering some efficiency problems, please downgrade your clients to 3.0.81.
>
> There is probably a bug in the newest client and we are working on it.
>
> By the way: Please DO NOT downgrade Chunkservers if you have already upgraded them to 3.0.84, because there is a new 8k header in CS 3.0.84 and a downgrade may cause potential data loss - an older Chunkserver can't "understand" the new header.
>
> Best regards,
> Peter
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer
> e-mail: pio...@mo...
> www: https://moosefs.com
>
>> On 02 Nov 2016, at 10:44 PM, Ricardo J. Barberis <ric...@do...> wrote:
>>
>> Damn, I just sent an email describing this exact problem, but my whole cluster
>> runs on Moosefs 3.0.84.
|
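For sites with many affected workstations, the client downgrade can be scripted. A rough sketch, assuming Ubuntu clients, passwordless root ssh, and a hypothetical workstations.txt host list; note that already-mounted file systems keep running the old binary until they are unmounted and remounted:

    #!/bin/bash
    # downgrade-mfs-clients.sh - pin mfsmount back to 3.0.81 on each workstation.
    # WARNING: clients only; per the advisory above, chunkservers already on
    # 3.0.84 must NOT be downgraded (new 8k chunk header).
    while read -r host; do
        echo "== ${host} =="
        ssh "root@${host}" \
            'apt-get update -qq &&
             apt-get install -y --force-yes moosefs-client=3.0.81-1 &&
             mfsmount --version'
    done < workstations.txt    # hypothetical list, one hostname per line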
From: Piotr R. K. <pio...@mo...> - 2016-11-02 21:52:09
|
Hi,

just a quick update for now - if you are running MooseFS Mount v. 3.0.84 and encountering some efficiency problems, please downgrade your clients to 3.0.81.

There is probably a bug in the newest client and we are working on it.

By the way: Please DO NOT downgrade Chunkservers if you have already upgraded them to 3.0.84, because there is a new 8k header in CS 3.0.84 and a downgrade may cause potential data loss - an older Chunkserver can't "understand" the new header.

Best regards,
Peter

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail: pio...@mo...
www: https://moosefs.com

This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received it by mistake, please let us know by e-mail reply and delete it from your system; you may not copy this message or disclose its contents to anyone. Finally, the recipient should check this email and any attachments for the presence of viruses. Core Technology accepts no liability for any damage caused by any virus transmitted by this email.

> On 02 Nov 2016, at 10:44 PM, Ricardo J. Barberis <ric...@do...> wrote:
>
> Damn, I just sent an email describing this exact problem, but my whole cluster
> runs on Moosefs 3.0.84.
|
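Before deciding what can safely be downgraded, it helps to inventory the version of every component first. A hedged sketch (host names are placeholders, and it assumes the daemon binaries report their version via -v, as mfsmount does via --version):

    # Client side: check the mfsmount binary that is actually installed
    mfsmount --version

    # Chunkservers (placeholders cs1 cs2): ask each daemon for its version
    for h in cs1 cs2; do
        echo -n "${h}: "
        ssh "root@${h}" mfschunkserver -v
    done
    # Any chunkserver already at 3.0.84 stays there: the new 8k chunk header
    # means older chunkserver binaries cannot read chunks it has written.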
From: Ricardo J. B. <ric...@do...> - 2016-11-02 21:44:43
|
Damn, I just sent an email describing this exact problem, but my whole cluster runs on Moosefs 3.0.84.

On Wednesday 02/11/2016, Wilson, Steven M wrote:
> Hi,
>
> We have several workstations that are using the latest version of mfsmount
> (3.0.84) and I've started to receive complaints about very slow
> performance. I ran a few tests (untarring the Linux kernel source) and it
> appears that on the 3.0.84 clients performance will continue to degrade
> each time I run the test. For example, one workstation shows these results
> from three successive runs:
>
> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>
> real    4m34.614s
> user    0m1.416s
> sys     0m7.480s
>
> real    2m57.863s
> user    0m0.436s
> sys     0m2.192s
>
> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>
> real    6m59.159s
> user    0m1.924s
> sys     0m7.276s
>
> real    5m39.582s
> user    0m0.484s
> sys     0m2.548s
>
> root@saturn:/net/em/test# time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>
> real    9m1.816s
> user    0m1.928s
> sys     0m7.160s
>
> real    7m54.979s
> user    0m0.584s
> sys     0m2.968s
>
> If I unmount the file system and then mount it again, performance returns
> to normal but will degrade over time like before.
>
> On the other hand, if I run the same test on a different client using
> mfsmount 3.0.81 then my performance remains stable and doesn't degrade over
> time after heavy use.
>
> Is there perhaps a problem with mfsmount versions higher than 3.0.81?
>
> I should add that none of my servers (master, metalogger, chunk) are
> running higher than 3.0.81, so this could be due to a mismatch between
> mfsmount and server version. I doubt this but I wanted to mention it.
>
> Just to mention in passing, the same test on a local disk is blazingly fast
> in comparison. I understand that this is a really torturous test for a
> distributed file system but the performance discrepancy is quite
> substantial (5 seconds vs. 275 seconds for the untar, 1 second vs. 178
> seconds for the rm). Here are the timings on the local disk:
>
> stevew@otter:/otter-scratch/TEST$ time tar xf linux-4.9-rc3.tar; time rm -rf linux-4.9-rc3
>
> real    0m5.419s
> user    0m0.304s
> sys     0m2.368s
>
> real    0m1.038s
> user    0m0.052s
> sys     0m0.976s
>
> Thanks,
>
> Steve

--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
|
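Until the client bug is fixed, the unmount/remount observation above suggests a stopgap. A minimal sketch, assuming a mount point of /mfs and a master reachable as mfsmaster (both placeholders; adjust to your setup):

    #!/bin/sh
    # remount-mfs.sh - recover from degraded 3.0.84 client performance by
    # remounting; performance reportedly resets on a fresh mount.
    umount /mfs 2>/dev/null || umount -l /mfs   # fall back to a lazy unmount if busy
    mfsmount /mfs -H mfsmaster                  # remount from the master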
From: Ricardo J. B. <ric...@do...> - 2016-11-02 21:31:38
|
Hello,

I'm having a strange issue with Moosefs 3.x. My servers (1 metadata server, 2 chunkservers) and clients (webhosting servers) are all CentOS 7 with Moosefs 3.0.84.

My clients also have a Moosefs 2.x cluster mounted, so I compiled moosefs-client-3.0.84 (because mfsmount 3.x can't mount from a moosefs 2.x master).

The day after we started using moosefs 3.x we noticed some processes (tar xf, rm -f) got stuck for a long time. We killed them, remounted moosefs 3.x and started the processes again, and they completed quickly and without errors.

Here's an example showing how performance degrades over time for writing:

[root@centos7 ~] # /usr/local/moosefs-3/bin/mfsmount --version
MFS version 3.0.84-1
FUSE library version: 2.9.2
fusermount version: 2.9.2

[root@centos7 ~] # time tar xf joomla.tar -C /mfs/joomla

real    0m17.145s
user    0m0.273s
sys     0m0.361s

[root@centos7 ~] # time tar xf joomla.tar -C /mfs/joomla

real    0m35.582s
user    0m0.298s
sys     0m0.521s

[root@centos7 ~] # time tar xf joomla.tar -C /mfs/joomla

real    0m59.039s
user    0m0.358s
sys     0m0.605s

[root@centos7 ~] # time tar xf joomla.tar -C /mfs/joomla

real    1m24.846s
user    0m0.350s
sys     0m0.627s

[root@centos7 ~] # time tar xf joomla.tar -C /mfs/joomla

real    1m37.159s
user    0m0.376s
sys     0m0.686s

Another example:

[root@centos7 ~] # for N in {0..4} ; do echo -e "\nUncompressing in /mfs/joomla - ${N}" ; mkdir -p /mfs/joomla ; sleep 2 ; time tar xf /usr/src/joomla.tar -C /mfs/joomla ; echo "Sleeping 20..." ; sleep 20 ; echo -e "\nRemoving /mfs/joomla - ${N}" ; time rm -fr /mfs/joomla ; echo "Sleeping 5..." ; sleep 5 ; done

Uncompressing in /mfs/joomla - 0
real    0m12.545s
user    0m0.255s
sys     0m0.272s
Sleeping 20...

Removing /mfs/joomla - 0
real    0m3.524s
user    0m0.022s
sys     0m0.127s
Sleeping 5...

Uncompressing in /mfs/joomla - 1
real    0m15.138s
user    0m0.289s
sys     0m0.289s
Sleeping 20...

Removing /mfs/joomla - 1
real    0m6.321s
user    0m0.025s
sys     0m0.166s
Sleeping 5...

Uncompressing in /mfs/joomla - 2
real    0m14.181s
user    0m0.262s
sys     0m0.266s
Sleeping 20...

Removing /mfs/joomla - 2
real    0m6.539s
user    0m0.025s
sys     0m0.134s
Sleeping 5...

Uncompressing in /mfs/joomla - 3
real    0m15.442s
user    0m0.253s
sys     0m0.253s
Sleeping 20...

Removing /mfs/joomla - 3
real    0m8.541s
user    0m0.033s
sys     0m0.171s
Sleeping 5...

Uncompressing in /mfs/joomla - 4
real    0m17.717s
user    0m0.257s
sys     0m0.280s
Sleeping 20...

Removing /mfs/joomla - 4
real    0m11.533s
user    0m0.031s
sys     0m0.172s
Sleeping 5...

If I mount the same cluster (mfsmaster 3.x) with mfsmount 2.x it works as expected (but notice that moosefs 3.x works way faster initially!), e.g.:

[root@centos7 ~] # for N in {0..4} ; do echo -e "\nUncompressing in /mfs/joomla - ${N}" ; mkdir -p /mfs/joomla ; sleep 2 ; time tar xf /usr/src/joomla.tar -C /mfs/joomla ; echo "Sleeping 20..." ; sleep 20 ; echo -e "\nRemoving /mfs/joomla - ${N}" ; time rm -fr /mfs/joomla ; echo "Sleeping 5..." ; sleep 5 ; done

Uncompressing in /mfs/joomla - 0
real    0m31.878s
user    0m0.311s
sys     0m0.394s
Sleeping 20...

Removing /mfs/joomla - 0
real    0m3.495s
user    0m0.026s
sys     0m0.118s
Sleeping 5...

Uncompressing in /mfs/joomla - 1
real    0m25.544s
user    0m0.283s
sys     0m0.394s
Sleeping 20...

Removing /mfs/joomla - 1
real    0m3.435s
user    0m0.017s
sys     0m0.127s
Sleeping 5...

Uncompressing in /mfs/joomla - 2
real    0m26.498s
user    0m0.258s
sys     0m0.314s
Sleeping 20...

Removing /mfs/joomla - 2
real    0m2.725s
user    0m0.022s
sys     0m0.138s
Sleeping 5...

Uncompressing in /mfs/joomla - 3
real    0m21.142s
user    0m0.276s
sys     0m0.344s
Sleeping 20...

Removing /mfs/joomla - 3
real    0m2.718s
user    0m0.020s
sys     0m0.110s
Sleeping 5...

Uncompressing in /mfs/joomla - 4
real    0m22.301s
user    0m0.259s
sys     0m0.292s
Sleeping 20...

Removing /mfs/joomla - 4
real    0m2.942s
user    0m0.025s
sys     0m0.106s
Sleeping 5...

I tried with several combinations of mfsnoxattrs, mfsnoposixlocks and mfsnobsdlocks. I also set 'ATIME_MODE = 4' on the master but it made no difference.

Any hints, optimization tips, etc. are welcome!

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
|
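To test option combinations like the ones mentioned above in a repeatable way, the benchmark loop can be wrapped once per option set. A rough sketch (mount point, master name, and tarball path are placeholders; the option names are the ones given in the report):

    #!/bin/bash
    # For each mount-option set, remount /mfs and time one untar + rm cycle.
    for OPTS in "" "mfsnoxattrs" "mfsnoxattrs,mfsnoposixlocks,mfsnobsdlocks"; do
        umount /mfs 2>/dev/null
        mfsmount /mfs -H mfsmaster ${OPTS:+-o "$OPTS"}
        echo "== options: ${OPTS:-(defaults)} =="
        mkdir -p /mfs/joomla
        time tar xf /usr/src/joomla.tar -C /mfs/joomla
        time rm -rf /mfs/joomla
    done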