From: Valeri G. <ga...@ki...> - 2016-04-21 15:02:38
Dear Experts,

I have a sort of weirdness I just stumbled over. I migrated some directories to MooseFS (free open-source version, standard installation "by the book" on FreeBSD). At some point I decided to migrate a directory back to a local (UFS) filesystem on the machine. Here is the weirdness I observe: after I run

rsync -avu /path/to/mfs/mount/directory /path/to/local/fs

I do get the content of the directory on the local filesystem as expected, with all correct file attributes (user/group ownership, creation time, content). However, the source files on MooseFS change their timestamps, as if they now report last access time instead of creation time. They were reporting correct creation times after they were rsync'ed from the local filesystem to MooseFS originally. This basically renders the rsync command useless (and damaging, if you care about file creation times) when you rsync from MooseFS elsewhere.

What am I doing wrong? I've used Linux and Unix for over a decade and a half, and have used rsync for a comparable period of time. What I hit with MooseFS gives me a kind of shock, and I can't figure out what I could be doing wrong.

Thanks in advance for all your advice.

Valeri

++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
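A quick way to see which timestamp actually changes is to record atime/mtime/ctime on one file before and after the copy. This is only a sketch, not from the original thread; the paths are hypothetical and GNU stat syntax is shown (on FreeBSD use "stat -x" instead):

    # pick one file on the MooseFS mount (hypothetical path)
    F=/mnt/mfs/some/dir/file.dat

    # record all three timestamps before the copy (GNU coreutils stat)
    stat -c 'before  atime=%x  mtime=%y  ctime=%z' "$F"

    # run the same kind of copy as described above
    rsync -avu /mnt/mfs/some/dir /tmp/localcopy/

    # and compare afterwards
    stat -c 'after   atime=%x  mtime=%y  ctime=%z' "$F"

    # if only atime moved, the data itself is intact; newer MooseFS releases
    # expose an ATIME_MODE master option that controls atime updates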
From: Alex C. <ac...@in...> - 2016-04-15 18:18:27
Hi List,

Deleting the metadata.mfs solved this (I backed up the whole /var/lib/mfs first as a precaution). No files were lost.

Cheers

Alex

On 15/04/16 14:49, Alex Crow wrote:
> Hi,
>
> A really weird issue. mfsmaster is not starting, complaining that
> metadata.mfs does not exist, when it actually does. Note that the
> machine was forcibly rebooted just before this started.
> [...]
> Can anyone help as this cluster was running fine before and we need to
> have it working again by Monday.
>
> We do have several metaloggers that have been running the whole time if
> that helps.
>
> Best regards
>
> Alex
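For reference, the recovery path described here boils down to moving the stale metadata.mfs aside and letting the master rebuild from its backups and changelogs. A minimal sketch, assuming the default /var/lib/mfs data directory and a MooseFS 3.x master; exact file names may differ on a given installation:

    # take a full copy of the metadata directory first, as Alex did
    cp -a /var/lib/mfs /var/lib/mfs.backup.$(date +%F)

    # move the metadata file the master refuses to load out of the way
    mv /var/lib/mfs/metadata.mfs /var/lib/mfs/metadata.mfs.broken

    # start the master in auto-recovery mode ('-a') so it picks the newest
    # usable metadata backup and applies the changelog_*.mfs files
    mfsmaster -a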
From: Alex C. <ac...@in...> - 2016-04-15 13:49:35
Hi,

A really weird issue. mfsmaster is not starting, complaining that metadata.mfs does not exist, when it actually does. Note that the machine was forcibly rebooted just before this started.

[root@glenrock ~]# mfsmaster start
open files limit has been set to: 16384
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading label data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.1155)
loading names ... ok (0.2793)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0289)
loading open files data ... ok (0.0000)
loading flock_locks data ... ok (0.0000)
loading posix_locks data ... loading posix_locks: lock on closed file !!!
cleaning metadata ...
cleaning objects ... done
cleaning names ... done
cleaning deletion timestamps ... done
cleaning quota definitions ... done
cleaning chunks data ...done
cleaning xattr data ...done
cleaning posix_acl data ...done
cleaning flock locks data ...done
cleaning posix locks data ...done
cleaning chunkservers data ...done
cleaning open files data ...done
cleaning sessions data ...done
cleaning label sets data ...done
metadata have been cleaned
error loading metadata.mfs - try using option '-a'
init: metadata manager failed !!!
error occured during initialization - exiting

[root@glenrock ~]# mfsmaster -a
open files limit has been set to: 16384
working directory: /var/lib/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
topology file has been loaded
loading metadata ...
loading sessions data ... ok (0.0000)
loading label data ... ok (0.0000)
loading objects (files,directories,etc.) ... ok (0.1160)
loading names ... ok (0.2792)
loading deletion timestamps ... ok (0.0000)
loading quota definitions ... ok (0.0000)
loading xattr data ... ok (0.0000)
loading posix_acl data ... ok (0.0291)
loading open files data ... ok (0.0000)
loading flock_locks data ... ok (0.0000)
loading posix_locks data ... loading posix_locks: lock on closed file !!!
cleaning metadata ...
cleaning objects ... done
cleaning names ... done
cleaning deletion timestamps ... done
cleaning quota definitions ... done
cleaning chunks data ...done
cleaning xattr data ...done
cleaning posix_acl data ...done
cleaning flock locks data ...done
cleaning posix locks data ...done
cleaning chunkservers data ...done
cleaning open files data ...done
cleaning sessions data ...done
cleaning label sets data ...done
metadata have been cleaned
error loading metadata file (metadata.mfs): ENOENT (No such file or directory)
init: metadata manager failed !!!
error occured during initialization - exiting

[root@glenrock ~]# ls -l /var/lib/mfs/metadata.mfs
-rw-r----- 1 mfs mfs 65582156 Apr 15 11:23 /var/lib/mfs/metadata.mfs

Can anyone help as this cluster was running fine before and we need to have it working again by Monday.

We do have several metaloggers that have been running the whole time if that helps.

Best regards

Alex
From: Anatoliy L. <ana...@ga...> - 2016-04-05 09:08:14
Hello,

Thank you for MooseFS, it's a really good product. I wanted to ask you for help: I am facing a problem with failover on the 2.x version. For now we use 2.0.84-1.

For our failover mechanism we use keepalived, with two scripts, vip-up.sh and vip-down.sh. The main idea is that when we have a problem with one server, the vip-down script stops mfsmaster, starts mfsmetalogger and brings the VIP interface down. The vip-up script: 1) brings the VIP interface up, 2) stops mfsmetalogger, 3) starts /usr/sbin/mfsmaster -a.

But we face some problems with the master change: some data is lost after we switch the master. We tested it with rsync and dd. We also tried playing with META_DOWNLOAD_FREQ, but it has not given results.

Can you please advise us what we are doing wrong? Or advise us how to make a good failover for the 2.x version?

P.S. Some of the errors we get, for example:

hole in change files (entries from changelog_ml.0.mfs:17081 to changelog_ml.0.mfs:17170 are missing) - add more files
error applying changelogs - ignoring (using best possible metadata version)

Big thanks for your time.

Regards,
--
Anatoliy Laktionov
GNS Unix Administrator
10 Pravdy Ave., 3 - 5 fl., BC KVARTAL, Kharkiv, 61022 Ukraine
Mobile : (+38) 063-87-46-100
Jabber: ana...@ga...
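A minimal sketch of the kind of keepalived notify scripts Anatoliy describes. The interface name, VIP address and netmask are assumptions, and this is only an outline of the idea, not a tested failover setup:

    # vip-down.sh -- this node gives up the master role (hypothetical VIP/interface)
    mfsmaster stop
    mfsmetalogger start
    ip addr del 10.0.0.100/24 dev eth0

    # vip-up.sh -- this node takes over the master role
    ip addr add 10.0.0.100/24 dev eth0
    mfsmetalogger stop
    /usr/sbin/mfsmaster -a      # recover from the local metadata backup + changelogs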
From: Aleksander W. <ale...@mo...> - 2016-04-05 06:04:35
Hi Wolfgang,

This is a very good migration scenario.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 04/04/2016 11:22 PM, Wolfgang wrote:
> Dear List!
>
> My question is how I can move a complete hdd to a new Server.
> Background is that one of my chunkservers is 32bit and I want to get rid
> of it. This Chunkserver has one 4TB sata disc.
> I would like to put the disc into one of the existing chunkservers
> (64bit) with a free sata port.
>
> Question is if I can do the following without losing chunks/data:
> # mark the 2 Chunkservers (32bit & 64bit) as maintenance,
> # shutdown both,
> # remove the disc from the 32bit server,
> # put it into the existing 64bit chunkserver,
> # startup the 64bit server, and
> # mount the 4TB disc
> # add the disc in mfshdd.cfg
> # reload the mfschunkserver
> ??
>
> Thanks for Info.
> Regards
> Wolfgang
From: Wolfgang <moo...@wo...> - 2016-04-04 21:40:51
Dear List!

My question is how I can move a complete hdd to a new Server. Background is that one of my chunkservers is 32bit and I want to get rid of it. This Chunkserver has one 4TB sata disc. I would like to put the disc into one of the existing chunkservers (64bit) with a free sata port.

Question is if I can do the following without losing chunks/data:
# mark the 2 Chunkservers (32bit & 64bit) as maintenance,
# shutdown both,
# remove the disc from the 32bit server,
# put it into the existing 64bit chunkserver,
# startup the 64bit server, and
# mount the 4TB disc
# add the disc in mfshdd.cfg
# reload the mfschunkserver
??

Thanks for Info.
Regards
Wolfgang
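For reference, the per-disc part of this procedure on the receiving 64-bit chunkserver would look roughly like the sketch below. The device name, mount point and config path are assumptions; the maintenance-mode step from the list above is done via the CGI/CLI and is not shown:

    # stop the chunkserver before touching its disc list
    mfschunkserver stop

    # mount the 4TB disc moved over from the 32bit machine (hypothetical device)
    mkdir -p /mnt/mfschunks4
    mount /dev/sdd1 /mnt/mfschunks4
    # make sure the mfs user can write there (chown mfs:mfs if needed)

    # add the new path to the chunkserver's disc list
    echo "/mnt/mfschunks4" >> /etc/mfs/mfshdd.cfg

    # start the chunkserver again so it registers the extra disc
    mfschunkserver start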
From: Alex C. <ac...@in...> - 2016-03-29 17:30:17
Hi,

We have two installations, one cluster on PRO and one on the free version (for backups). This problem affects the backup cluster (a pair of servers, replica 2).

On the PRO cluster, I have:

total space: 86 TiB
avail space: 56 TiB
trash space: 1.0 GiB
trash files: 2324
sustained space: 0 B
sustained files: 0
all fs objects: 61381852
directories: 2984334
files: 58397444
chunks: 58053021
all chunk copies: 116106090
regular chunk copies: 116106090

and usage:

#1: ip 172.16.41.202, version 3.0.73 PRO, state LEADER, metadata version 1 513 203 176, RAM used 19 GiB, CPU used all:10.34% sys:1.38% user:8.96%, last successful metadata save Tue Mar 29 01:02:42 2016, duration ~2.7m, status "Downloaded from other master", exports checksum E5ABFEBB6AB52D4A
#2: ip 192.168.22.19, version 3.0.73 PRO, state FOLLOWER, metadata version 1 513 203 176, RAM used 17 GiB, CPU used all:0.65% sys:0.55% user:0.10%, last successful metadata save Tue Mar 29 18:00:30 2016, duration ~30.0s, status "Saved in background", exports checksum E5ABFEBB6AB52D4A

On the backup cluster, I have:

total space: 0 B
avail space: 0 B
trash space: 158 GiB
trash files: 88697
sustained space: 0 B
sustained files: 0
all fs objects: 80913065
directories: 11751374
files: 69051819
chunks: 70046972
all chunk copies: 127354504
regular chunk copies: 127354504

and usage:

#1: ip 172.16.26.20, version 3.0.74, state -, metadata version 2 025 942 120, RAM used 39 GiB, CPU used all:12.85% sys:7.07% user:5.78%, last successful metadata save Tue Mar 29 18:01:12 2016, duration ~1.2m, status "Saved in background", exports checksum E5ABFEBB6AB52D4A

As you can see, the backup cluster (free version) uses much more RAM per file than the main, PRO cluster. I did have some previous problems after attempting snapshots of the whole MooseFS tree, where reports stated I had over 240,000,000 files (!), but on removing the snapshot the file number went back to normal.

Any ideas? I can send a metadata file via scp if required.

Best regards

Alex
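Rough arithmetic on the figures above (a back-of-the-envelope check, not an official sizing rule) shows what "much more RAM per file" means here:

    # bytes of master RAM per filesystem object, from the CGI numbers quoted above
    awk 'BEGIN {
        printf "PRO leader : %.0f bytes/object\n", 19*2^30 / 61381852;
        printf "backup     : %.0f bytes/object\n", 39*2^30 / 80913065;
    }'
    # prints roughly 332 vs 518 bytes per object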
From: Piotr R. K. <pio...@mo...> - 2016-03-27 23:57:10
Dear Eugene,

could you please send us some logs from the Master Server from the time these lags were occurring?

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com

> On 25 Mar 2016, at 8:46 PM, Евгений Дятлов <jen...@gm...> wrote:
>
> Who knows what it means when the CPU graph of the moosefs master has some lags like this:
> http://prntscr.com/ajzfsc
>
> At these moments everything looks normal. Users can write|read any files. MooseFS ver 2.0.88.
From: Евгений Д. <jen...@gm...> - 2016-03-25 19:46:52
Who knows what it means when the CPU graph of the moosefs master has some lags like this:
http://prntscr.com/ajzfsc

At these moments everything looks normal. Users can write|read any files. MooseFS ver 2.0.88.
From: Piotr R. K. <pio...@mo...> - 2016-03-09 19:11:31
I mean of course /var/lib/mfs tarball(s).

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com

> On 09 Mar 2016, at 8:06 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Hi,
>
> if it will be possible, we'll try to look at it tomorrow.
>
> Please send me in a private message your public SSH key, then I'll add it onto our server and send you a path to upload files (SCP). Please create a tarball from the master and the other one from the metalogger.
>
> Best regards,
> [...]
From: Piotr R. K. <pio...@mo...> - 2016-03-09 19:06:37
Hi,

if it will be possible, we'll try to look at it tomorrow.

Please send me in a private message your public SSH key, then I'll add it onto our server and send you a path to upload files (SCP). Please create a tarball from the master and the other one from the metalogger.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com

> On 09 Mar 2016, at 6:30 PM, Richard Morris <r.m...@ba...> wrote:
>
> Hello,
>
> We're using version 1.6.25 which has been stable until now. This afternoon I noticed that the mfsmaster was not responding via the CGI - although the server was pinging, it was not responding via Putty or via the console.
> [...]
> If anyone could help that'll be grand.
>
> Cheers,
>
> Richard.
From: Richard M. <r.m...@ba...> - 2016-03-09 17:50:35
Hello,

We're using version 1.6.25, which has been stable until now. This afternoon I noticed that the mfsmaster was not responding via the CGI - although the server was pinging, it was not responding via Putty or via the console.

A restart brought it back. I then restarted the other servers in the group and started the process of bringing MFS up by running the mfsmaster start command on the master. I got an error:

"can't open metadata file
if this is new installation then rename /var/mfs/metadata.mfs.empty as /var/mfs/metadata.mfs
init: file system manager failed !!!
error occured during initialization - exiting"

So, I then ran the mfsmetarestore -a command, at which point I got another error:

"connecting files and chunks ... ok
335588499: unknown entry ''
335588499: error: 32 (Data mismatch)"

And the master server failed to start.

So, I then went to the metalogger, got the files off there and ran the mfsmetarestore -a command to recreate the mfsmaster.

This was successful (it created the metadata.mfs file), however there were hundreds of:

merge error - possibly corrupted input file - ignore entry
merge error - possibly corrupted input file - ignore entry
merge error - possibly corrupted input file - ignore entry
store metadata into file: /var/mfs/metadata.mfs

My question is this: I still have the original MFS master server (which still has the error 32 on it) as well as a sort-of-repaired metadata.mfs file which I could use. Which would be better (assuming that someone would come to my aid and repair the file on the master)? Also, assuming I have to use the repaired metadata.mfs file, when the server connects to the chunk servers it'll presumably see more stuff within the chunks than it has in the database - will it then (a) add the extra data from the chunks, (b) fail to start, or even worse (c) delete the extra chunks?

I'm happy to provide whatever files are needed, although they do seem to be in the region of 90 megs, so I will place them on our download portal as requested.

We have 3 chunk servers, 2 share servers (including one metalogger) plus the master and metalogger.

If anyone could help that'll be grand.

Cheers,

Richard.
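A rough outline of the metalogger-based restore Richard describes, for the 1.6.x tools. The paths follow the /var/mfs directory from his error messages; the metalogger file names and locations are assumptions, so treat this as a sketch rather than the documented procedure:

    # keep an untouched copy of the broken master metadata first
    cp -a /var/mfs /var/mfs.broken.$(date +%F)

    # copy the newest metadata backup and changelog files from the metalogger
    # into /var/mfs (file names on the metalogger carry an _ml suffix; rename
    # them if your mfsmetarestore version expects the master's names)
    scp metalogger:/var/mfs/metadata_ml.mfs.back /var/mfs/
    scp metalogger:'/var/mfs/changelog_ml.*.mfs' /var/mfs/

    # rebuild metadata.mfs from the backup + changelogs, then start the master
    cd /var/mfs && mfsmetarestore -a
    mfsmaster start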
From: Piotr R. K. <pio...@mo...> - 2016-03-08 09:41:49
Dear MooseFS Users,

We are extremely pleased to announce that MooseFS has been released on GitHub [1]. At the moment, the repository contains two branches: master (for the currently developed version, which as of today happens to be MooseFS 3.0.x) and 2.0.x (containing MooseFS 2.0.x).

If you want to contribute to MooseFS, we encourage you to do so by using GitHub's bug tracker. Pull request guidelines and requirements will be published soon.

If you like MooseFS and want to support it - please star the project or watch it on GitHub.

Moose regards,
MooseFS Team

[1] https://github.com/moosefs/moosefs

root@test:~# cat /mnt/mfs/.mooseart
\_\ /_/ \_\_ _/_/ \--/ /Oo\_--____ (__) ) ``\ __ | ||-' `|| || || "" ""
root@test:~#

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com
From: Piotr R. K. <pio...@mo...> - 2016-03-08 08:24:33
Dear MooseFS Users,

we released the newest versions of both MooseFS 3 and MooseFS 2 recently: 3.0.73 and 2.0.88.

Please find the changes in MooseFS 3 since 3.0.71 below:

MooseFS 3.0.73-1 (2016-02-11)
(master) fixed restoring ARCHCHG from changelog
(cli+cgi) fixed displaying master list with followers only (pro version only)
(master) added using size and length quota to fix disk usage values (statfs)
(master) fixed xattr bug which may lead to data corruption and segfaults (intr. in 2.1.1)
(master) added 'node_info' packet
(tools) added '-p' option to 'mfsdirinfo' - 'precise mode'
(master) fixed edge renumeration
(master) added detecting of wrong edge numbers and force renumeration in such case

MooseFS 3.0.72-1 (2016-02-04)
(master+cgi+cli) added global 'umask' option to exports.cfg
(all) changed address of FSF in GPL licence text
(debian) removed obsolete conffiles
(debian) fixed copyright file
(mount) fixed parsing mfsmount.cfg (system options like nodev,noexec etc. were ommited)
(master) changed way how cs internal rebalance state is treated by master (as 'grace' state instead of 'overloaded')
(mount) fixed bug in read module (setting etab after ranges realloc)
(tools) removed obsoleted command 'mfssnapshot'

Please find the changes in MooseFS 2 since 2.0.83 below:

MooseFS 2.0.88-1 (2016-03-02)
(master) added METADATA_SAVE_FREQ option (allow to save metadata less frequently than every hour)

MooseFS 2.0.87-1 (2016-02-23)
(master) fixed status returned by writechunk after network down/up

MooseFS 2.0.86-1 (2016-02-22)
(master) fixed initialization of ATIME_MODE

MooseFS 2.0.85-1 (2016-02-11)
(master) added ATIME_MODE option to set atime modification behaviour
(master) added using size and length quota to fix disk usage values (statfs)
(all) changed address of FSF in GPL licence text
(debian) removed obsolete conffiles
(debian) fixed copyright file
(mount) fixed parsing mfsmount.cfg (system options like nodev,noexec etc. were ommited)
(tools) removed obsoleted command 'mfssnapshot'

MooseFS 2.0.84-1 (2016-01-19)
(mount) fixed setting file length in write module during truncate (fixes "git svn" case)

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer
e-mail : pio...@mo...
www : https://moosefs.com

> On 27 Jan 2016, at 5:51 PM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Dear MooseFS Users,
>
> today we published the newest version from the 3.x branch: 3.0.71.
>
> Please find the changes since 3.0.69 below:
>
> MooseFS 3.0.71-1 (2016-01-21)
> (master) fixed emptying trash issue (intr. in 3.0.64)
> (master) fixed possible segfault in chunkservers database (intr. in 3.0.67)
> (master) changed trash part choice from nondeterministic to deterministic
>
> MooseFS 3.0.70-1 (2016-01-19)
> (cgi+cli) fixed displaying info when there are no active masters (intr. in 3.0.67)
> (mount+common) refactoring code to be Windows ready
> (mount) added option 'mfsflattrash' (makes trash look like before version 3.0.64)
> (mount) added fixes for NetBSD (patch contributed by Tom Ivar Helbekkmo)
>
> Best regards,
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com
From: Warren M. <wa...@an...> - 2016-03-04 15:11:45
At the moment, it would provide, for lack of a better term, "quiet" routing between data centers - i.e., disabling the MooseFS chunkserver IPv4 address. Since IPv6 is such a much larger space, it gets around any potential NAT'ing issues.

Warren Myers
https://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a

Subject: Re: [MooseFS-Users] Does MooseFS support IPv6?
To: wa...@an...; moo...@li...
From: ale...@mo...
Date: Fri, 4 Mar 2016 11:20:41 +0100

> Hi,
>
> I would like to say that MooseFS does not support IPv6. Can you tell us the main reason why you need IPv6 for MooseFS? What would IPv6 change in your case?
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
From: Aleksander W. <ale...@mo...> - 2016-03-04 10:20:50
Hi,

I would like to say that MooseFS does not support IPv6. Can you tell us the main reason why you need IPv6 for MooseFS? What would IPv6 change in your case?

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
From: Warren M. <wa...@an...> - 2016-03-04 02:24:48
Haven't found it in the docs, but perhaps my search-fu is poor. Does MooseFS support running on IPv6?

If it doesn't, when will it? If it does, is there anything different that needs to be done to utilize IPv6 instead of IPv4 with MooseFS?

Warren Myers
https://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a
From: Aleksander W. <ale...@mo...> - 2016-03-03 10:00:56
Hi,

Please use these instructions: https://moosefs.com/download.html

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
From: 彭智希 <pen...@16...> - 2016-03-03 09:55:53
Hi,

How do I get version 2.0.88? Please give me a path to download it. Thanks very much!

At 2016-03-03 16:13:25, "Aleksander Wieliczko" <ale...@mo...> wrote:
> Hello,
>
> I would like to propose another test, to learn a little bit more about the chunkserver disconnections.
>
> We have just released MooseFS version 2.0.88 with a special option added for metadata save frequency.
>
> METADATA_SAVE_FREQ = 1
> [...]
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
From: 彭智希 <pen...@16...> - 2016-03-03 08:30:33
Hi,

The mode of the bond is:

[root@mfs-CNC-GZSX-231 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (xor)
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 70:e2:84:10:be:07
Slave queue ID: 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 70:e2:84:10:be:06
Slave queue ID: 0

At 2016-03-03 16:13:25, "Aleksander Wieliczko" <ale...@mo...> wrote:
> Hello,
>
> I would like to propose another test, to learn a little bit more about the chunkserver disconnections.
> [...]
From: Aleksander W. <ale...@mo...> - 2016-03-03 08:13:38
Hello,

I would like to propose another test, to learn a little bit more about the chunkserver disconnections.

We have just released MooseFS version 2.0.88 with a special option added for metadata save frequency:

METADATA_SAVE_FREQ = 1

The new parameter allows setting how often the master will store metadata. Until version 2.0.88 the metadata save frequency was permanently fixed at 1 hour.

Our test proposal is to:
1. Update the MooseFS cluster to version 2.0.88.
2. Change the MooseFS master parameter METADATA_SAVE_FREQ = 4 (this is only a suggestion).
3. Check if chunkserver disconnections occur only during metadata saves - every 4 hours.

We are waiting for your reply and details about the chunkserver disconnections.

By the way, which BONDING mode are you using?

mode=0 (Balance Round Robin)
mode=1 (Active backup)
mode=2 (Balance XOR)
mode=3 (Broadcast)
mode=4 (802.3ad)
mode=5 (Balance TLB)
mode=6 (Balance ALB)

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
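Applying the suggested change is a one-line edit on the master. A minimal sketch, assuming the configuration lives in the usual /etc/mfs/mfsmaster.cfg (check your packaging for the exact path):

    # dump metadata every 4 hours instead of the old fixed 1 hour
    echo "METADATA_SAVE_FREQ = 4" >> /etc/mfs/mfsmaster.cfg

    # make the running master re-read its configuration
    mfsmaster reload

    # then watch the chunkserver logs: if the disconnections only show up
    # around the (now 4-hourly) "store process has finished" messages,
    # the metadata dump is the trigger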
From: 彭智希 <pen...@16...> - 2016-03-02 10:07:33
Hi,

The information from ethtool is below:

[root@mfs-CNC-GZSX-231 mfs]# ethtool bond0
Settings for bond0:
        Supported ports: [ ]
        Supported link modes: Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes: Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 2000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

[root@mfs-CNC-GZSX-231 mfs]# ethtool bond0:1
Settings for bond0:1:
        Supported ports: [ ]
        Supported link modes: Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes: Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 2000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

At 2016-03-02 17:44:46, "Aleksander Wieliczko" <ale...@mo...> wrote:
> Hi,
>
> This looks really bad. You are receiving wrong packet sizes from the master.
> [...]
From: 彭智希 <pen...@16...> - 2016-03-02 09:56:43
Hi,

The information from ifconfig is below:

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 115.238.*.*  netmask 255.255.255.192  broadcast 115.238.231.191
        inet6 fe80::72e2:84ff:fe10:be07  prefixlen 64  scopeid 0x20<link>
        ether 70:e2:84:10:be:07  txqueuelen 0  (Ethernet)
        RX packets 38702523073  bytes 5773786695954 (5.2 TiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 53013419940  bytes 27359436449810 (24.8 TiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

bond0:1: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 101.69.*.*  netmask 255.255.255.192  broadcast 101.69.175.191
        ether 70:e2:84:10:be:07  txqueuelen 0  (Ethernet)

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 70:e2:84:10:be:07  txqueuelen 1000  (Ethernet)
        RX packets 19234284897  bytes 2899736294954 (2.6 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32277114680  bytes 16883118272719 (15.3 TiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xf7d20000-f7d3ffff

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 70:e2:84:10:be:07  txqueuelen 1000  (Ethernet)
        RX packets 19468238178  bytes 2874050401132 (2.6 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20736305277  bytes 10476318189468 (9.5 TiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xf7d00000-f7d1ffff

At 2016-03-02 17:44:46, "Aleksander Wieliczko" <ale...@mo...> wrote:
> Hi,
>
> This looks really bad. You are receiving wrong packet sizes from the master.
> [...]
From: Aleksander W. <ale...@mo...> - 2016-03-02 09:44:57
Hi,

This looks really bad. You are receiving wrong packet sizes from the master. You have some serious network problems. This is the first time we have seen such a log entry!

Feb 29 13:32:37 localhost mfschunkserver[9920]: (read) packet too long (1330533152/100000)

Please check all network components - even hardware (NIC card and RAM). I can also suggest checking the MTU size - network cards and switch.

We are waiting for your reply.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
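A few quick checks for the MTU part of this advice. This is a generic sketch (host names are placeholders), not something from the original thread:

    # confirm every interface in the path uses the same MTU
    ip link show bond0 | grep mtu
    ip link show eth0  | grep mtu
    ip link show eth1  | grep mtu

    # probe the path MTU towards the master with "do not fragment" pings;
    # 1472 bytes of payload + 28 bytes of headers = a full 1500-byte frame
    ping -M do -s 1472 -c 3 mfsmaster.example.net

    # if that fails but a smaller size works, something in between
    # fragments or drops full-size frames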
From: 彭智希 <pen...@16...> - 2016-03-02 08:32:54
Hi,

The log of mfschunkserver is below:

Feb 29 13:32:37 localhost mfschunkserver[9920]: (read) packet too long (1330533152/100000)
Feb 29 13:32:37 localhost mfschunkserver[9920]: (read) packet too long (67108864/100000)
Feb 29 13:32:37 localhost mfschunkserver[9920]: (read) packet too long (115343360/100000)
Feb 29 13:32:42 localhost mfschunkserver[9920]: (read) packet too long (788529152/100000)
Feb 29 13:32:42 localhost mfschunkserver[9920]: (read) packet too long (3472493488/100000)
Feb 29 13:32:42 localhost mfschunkserver[9920]: (read) packet too long (16777216/100000)
Feb 29 13:32:42 localhost mfschunkserver[9920]: got unknown message (type:196609)
Feb 29 13:32:42 localhost mfschunkserver[9920]: (read) packet too long (790644820/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (790644820/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (1330533152/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (1330533152/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (1929256211/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (16777217/100000)
Feb 29 13:33:29 localhost mfschunkserver[9920]: (read) packet too long (268435456/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (1392574464/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (4283649346/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: got unknown message (type:1811942144)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (795765091/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (1635085428/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (23070466/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (1330533152/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (67108864/100000)
Feb 29 13:33:35 localhost mfschunkserver[9920]: (read) packet too long (115343360/100000)
Feb 29 13:33:40 localhost mfschunkserver[9920]: (read) packet too long (788529152/100000)
Feb 29 13:33:41 localhost mfschunkserver[9920]: (read) packet too long (3472493488/100000)
Feb 29 13:33:42 localhost mfschunkserver[9920]: (read) packet too long (16777216/100000)
Feb 29 13:33:42 localhost mfschunkserver[9920]: got unknown message (type:196609)
Feb 29 13:33:42 localhost mfschunkserver[9920]: (read) packet too long (790644820/100000)
Feb 29 14:00:43 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 14:00:46 localhost mfschunkserver[9920]: connecting ...
Feb 29 14:00:46 localhost mfschunkserver[9920]: connected to Master
Feb 29 14:00:56 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 14:00:56 localhost mfschunkserver[9920]: connecting ...
Feb 29 14:00:56 localhost mfschunkserver[9920]: connected to Master
Feb 29 15:00:34 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 15:00:36 localhost mfschunkserver[9920]: connecting ...
Feb 29 15:00:36 localhost mfschunkserver[9920]: connected to Master
Feb 29 15:00:46 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 15:00:46 localhost mfschunkserver[9920]: connecting ...
Feb 29 15:00:46 localhost mfschunkserver[9920]: connected to Master
Feb 29 16:00:20 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 16:00:21 localhost mfschunkserver[9920]: connecting ...
Feb 29 16:00:21 localhost mfschunkserver[9920]: connected to Master
Feb 29 17:00:41 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 17:00:46 localhost mfschunkserver[9920]: connecting ...
Feb 29 17:00:46 localhost mfschunkserver[9920]: connected to Master
Feb 29 18:00:45 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 18:00:46 localhost mfschunkserver[9920]: connecting ...
Feb 29 18:00:46 localhost mfschunkserver[9920]: connected to Master
Feb 29 18:00:56 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 18:00:56 localhost mfschunkserver[9920]: connecting ...
Feb 29 18:00:56 localhost mfschunkserver[9920]: connected to Master
Feb 29 19:00:31 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 19:00:36 localhost mfschunkserver[9920]: connecting ...
Feb 29 19:00:36 localhost mfschunkserver[9920]: connected to Master
Feb 29 19:16:58 localhost mfschunkserver[9920]: (read) packet too long (790644820/100000)
Feb 29 19:17:32 localhost mfschunkserver[9920]: (read) packet too long (790644820/100000)
Feb 29 21:00:46 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 21:00:51 localhost mfschunkserver[9920]: connecting ...
Feb 29 21:00:51 localhost mfschunkserver[9920]: connected to Master
Feb 29 22:00:31 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 22:00:36 localhost mfschunkserver[9920]: connecting ...
Feb 29 22:00:36 localhost mfschunkserver[9920]: connected to Master
Feb 29 23:00:45 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 23:00:46 localhost mfschunkserver[9920]: connecting ...
Feb 29 23:00:46 localhost mfschunkserver[9920]: connected to Master
Feb 29 23:00:56 localhost mfschunkserver[9920]: masterconn: connection timed out
Feb 29 23:00:56 localhost mfschunkserver[9920]: connecting ...
Feb 29 23:00:56 localhost mfschunkserver[9920]: connected to Master

At 2016-03-02 16:09:18, "Aleksander Wieliczko" <ale...@mo...> wrote:
> Hi,
>
> Thank you for this information.
> All results look reasonable.
> Also I would like to add that you don't have problems with the fork operation. The metadata dump process took 24.561 seconds:
>
> Feb 29 10:00:24 mfs-CNC-GZSX-231 mfsmaster[20936]: store process has finished - store time: 24.561
>
> The most alarming aspect in your system log is that all chunkservers disconnected at the same time, 34 seconds after the metadata save:
>
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11537568169984 (10745.20 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11535864147968 (10743.61 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536655306752 (10744.35 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536570953728 (10744.27 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536672366592 (10744.36 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11537568169984 (10745.20 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536639004672 (10744.33 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536581537792 (10744.28 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536540909568 (10744.24 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536369881088 (10744.08 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536581566464 (10744.28 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536586833920 (10744.28 GiB), totalspace: 22425943621632 (20885.79 GiB)
> Feb 29 10:00:58 mfs-CNC-GZSX-231 mfsmaster[20936]: chunkserver disconnected - ip: 115.238.*.* / port: 9422, usedspace: 11536620646400 (10744.32 GiB), totalspace: 22425943621632 (20885.79 GiB)
>
> Would you be so kind and send us logs from one chunkserver and the client machines.
>
> We suspect that something is going on in your network.
> It looks like your master server is losing its network connection.
>
> Please check your network configuration.
> We are waiting for your feedback.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> MooseFS.com
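To check the correlation Aleksander points at, the two log patterns quoted in this thread can be pulled out side by side. A generic sketch; the log file location depends on the syslog setup:

    # on the master: when did each metadata dump finish?
    grep "store process has finished" /var/log/messages

    # on a chunkserver: when did it lose and regain the master connection?
    grep -E "masterconn: connection timed out|connected to Master" /var/log/messages

    # if every "connection timed out" lands within a minute of a dump,
    # the save (or the I/O load it causes) is the likely trigger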