From: Piotr R. K. <pio...@mo...> - 2016-01-15 10:06:54

Hi,

> On 15 Jan 2016, at 10:55 AM, Wolfgang <moo...@wo...> wrote:
> Yes, I thought so - that there are problems with lots of files in one
> directory - but what's the suggested way to undelete files/folders
> (in big instances as well as in small instances)?
> Probably it would be nice to have a CLI tool which can list deleted files
> in the folder structure as it was - and then undelete them from the client?

We are thinking about rebuilding the trash and making it more comfortable to use; it is on our roadmap. Unfortunately, at this moment I can't tell / guarantee when that may happen.

Thanks for your feedback!

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>
From: Wolfgang <moo...@wo...> - 2016-01-15 09:56:01

Hi Piotr Robert!

Thank you for your answer! Please find my response below:

On 2016-01-15 08:13, Piotr Robert Konopelko wrote:
> Hi Wolfgang,
>
>> On 15 Jan 2016, at 2:44 AM, Wolfgang <moo...@wo...> wrote:
>>
>> Dear List!
>>
>> As I recognized today from the Changelog, in version 3.0.64-1 the
>> mounted /mnt/meta/trash/ folder is now split up into 4096 sub-folders
>> like "000", "001", ..., "FFE", "FFF":
>>
>> # MooseFS 3.0.64-1 (2015-12-21)
>> * (master+mount) split trash into 4096 separate sub-trashes
>>
>> I guess you had a reason to do this - but how am I supposed to
>> undelete a complete folder including many files when it is spread
>> over thousands of folders?
>>
> Yes, we had. The reason is a problem with performing any operation on
> the trash when you have a big instance and a lot of files in trash
> (like millions, tens of millions).
> When you have such an amount of files in trash, any operation on it
> fails because of e.g. timeouts during the connection with the master.
> Apart from this, rm or even find is not able to operate on such a big
> amount of files.

Yes, I thought so - that there are problems with lots of files in one directory - but what's the suggested way to undelete files/folders (in big instances as well as in small instances)?

Probably it would be nice to have a CLI tool which can list deleted files in the folder structure as it was - and then undelete them from the client?

Greetings
Wolfgang

>> Is there any mfs tool one can use for restoring?
>>
>> In previous versions I would have done a
>> mv *testfolder* undel/
>>
>> But now I would have to do a:
>> find /mnt/meta/trash/ -iname testfolder -exec mv {} ./undel/ \;
>> to restore each file, but I think this would be error-prone and ugly ;-)
>>
> Maybe we will consider adding back the whole trash as a directory,
> probably hidden, especially for small instances.
>
>> Thanks for any advice.
>>
>> Greetings
>> Wolfgang
>
> Regards,
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com>
From: Piotr R. K. <pio...@mo...> - 2016-01-15 07:15:03

Hi Ricardo,

thank you for your report, we'll look into this issue.

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 14 Jan 2016, at 11:54 PM, Ricardo J. Barberis <ric...@do...> wrote:
>
> Hello,
>
> I noticed that the moosefs-client rpm depends on fuse-libs, but on
> CentOS 6 and 7 mfsmount doesn't work until I install fuse.
>
> Could that package be added as a Requires in moosefs-client's .spec file?
>
> Regards,
> --
> Ricardo J. Barberis
> Senior SysAdmin / IT Architect
> DonWeb
> La Actitud Es Todo
> www.DonWeb.com
From: Piotr R. K. <pio...@mo...> - 2016-01-15 07:13:29

Hi Wolfgang,

> On 15 Jan 2016, at 2:44 AM, Wolfgang <moo...@wo...> wrote:
>
> Dear List!
>
> As I recognized today from the Changelog, in version 3.0.64-1 the
> mounted /mnt/meta/trash/ folder is now split up into 4096 sub-folders
> like "000", "001", ..., "FFE", "FFF":
>
> MooseFS 3.0.64-1 (2015-12-21)
> (master+mount) split trash into 4096 separate sub-trashes
>
> I guess you had a reason to do this - but how am I supposed to undelete
> a complete folder including many files when it is spread over thousands
> of folders?

Yes, we had. The reason is a problem with performing any operation on the trash when you have a big instance and a lot of files in trash (like millions, tens of millions). When you have such an amount of files in trash, any operation on it fails because of e.g. timeouts during the connection with the master. Apart from this, rm or even find is not able to operate on such a big amount of files.

> Is there any mfs tool one can use for restoring?
>
> In previous versions I would have done a
> mv *testfolder* undel/
>
> But now I would have to do a:
> find /mnt/meta/trash/ -iname testfolder -exec mv {} ./undel/ \;
> to restore each file, but I think this would be error-prone and ugly ;-)

Maybe we will consider adding back the whole trash as a directory, probably hidden, especially for small instances.

> Thanks for any advice.
>
> Greetings
> Wolfgang

Regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>
From: Wolfgang <moo...@wo...> - 2016-01-15 01:59:50

Dear List!

As I recognized today from the Changelog, in version 3.0.64-1 the mounted /mnt/meta/trash/ folder is now split up into 4096 sub-folders like "000", "001", ..., "FFE", "FFF":

# MooseFS 3.0.64-1 (2015-12-21)
* (master+mount) split trash into 4096 separate sub-trashes

I guess you had a reason to do this - but how am I supposed to undelete a complete folder including many files when it is spread over thousands of folders? Is there any mfs tool one can use for restoring?

In previous versions I would have done a

    mv *testfolder* undel/

But now I would have to do a:

    find /mnt/meta/trash/ -iname testfolder -exec mv {} ./undel/ \;

to restore each file, but I think this would be error-prone and ugly ;-)

Thanks for any advice.

Greetings
Wolfgang
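For anyone hitting this before a rebuilt trash ships, the find-based approach can be wrapped so it walks each sub-trash separately instead of the whole tree at once. A minimal sketch, assuming the meta-mount layout and the "mv into undel/" restore convention quoted in this thread; the function name is mine, not a MooseFS tool:

```shell
# restore_from_trash META_MOUNT PATTERN
# Moves every trash entry whose name matches PATTERN out of the hex
# sub-trashes (000 .. FFF) into META_MOUNT/trash/undel/, which the
# meta mount interprets as an undelete request.
restore_from_trash() {
    meta="$1"; pattern="$2"
    for d in "$meta"/trash/*/; do
        # skip the undel directory itself, only walk the sub-trashes
        case "$d" in */undel/) continue ;; esac
        # -maxdepth 1 keeps find inside one sub-trash at a time
        find "$d" -maxdepth 1 -iname "*$pattern*" -exec mv {} "$meta/trash/undel/" \;
    done
}

# Example against a real meta mount:
#   restore_from_trash /mnt/meta testfolder
```

Going sub-trash by sub-trash keeps each find invocation small, which matters exactly because the split was introduced to avoid huge single-directory operations.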
From: Ricardo J. B. <ric...@do...> - 2016-01-14 22:54:23

Hello,

I noticed that the moosefs-client rpm depends on fuse-libs, but on CentOS 6 and 7 mfsmount doesn't work until I install fuse.

Could that package be added as a Requires in moosefs-client's .spec file?

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
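Ricardo's suggestion would amount to a one-line addition in the package spec. A hypothetical fragment for illustration - the actual moosefs-client.spec layout may differ:

```
# Hypothetical excerpt from moosefs-client.spec: declare the fuse
# userland package as an explicit dependency alongside fuse-libs,
# so mfsmount works out of the box on CentOS 6/7.
Requires: fuse-libs
Requires: fuse
```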
From: Wilson, S. M <st...@pu...> - 2016-01-14 22:37:55

Hi,

When I attempt to fetch an SVN repository using the "git svn fetch" command, it always fails when writing to a MooseFS volume. At some point during the process I get a "checksum mismatch" error like the following:

    Checksum mismatch: testmails/positive/348
    expected: 42723d0ae0353368e9adb648da2eb6bc
    got:      637d201f8f22d9d71ba12bf7f39f14c8

I've tried this on two different MooseFS volumes from different servers (running MooseFS v. 2.0.72) with the same results. If I fetch to a local disk formatted with XFS, though, I don't get a checksum error.

These are the commands that initially demonstrated the problem for me:

    git svn init http://emg.nysbc.org/svn/myami -s
    git svn fetch

But I've since tested it with SpamAssassin and had the same results:

    git svn init --prefix=origin/ -s https://svn.apache.org/repos/asf/spamassassin
    git svn fetch

Could someone try this out on their own MooseFS installation to see if it also gives checksum errors? Is this a known bug?

Thanks!
Steve
From: Tom I. H. <ti...@ha...> - 2016-01-14 14:01:58

Piotr Robert Konopelko <pio...@mo...> writes:

> (mount) use direct I/O as a default mode on Mac OS X (due to
> keep_cache bug in kernel/fuse)
> [...]
> (mount) use direct I/O as a default mode on FreeBSD (due to keep_cache
> bug in kernel/fuse)

Do you have any more information on this? And maybe a way to test whether one has this bug? I'm running MooseFS 3 on NetBSD, and I'm wondering whether I ought to do the same thing there. If the problem is in the kernel, as opposed to the userland FUSE software, we probably don't have it...

Meanwhile, here are a couple of NetBSD patches I use. The /proc file system in NetBSD is modeled on the one in Linux, but is slightly different: under NetBSD, 'cat /proc/*/status' is a sort of ps. ;)

-tih

--- mfsmount/getgroups.c.orig	2016-01-13 18:37:01.000000000 +0100
+++ mfsmount/getgroups.c	2016-01-14 08:59:16.000000000 +0100
@@ -45,11 +45,15 @@
 static int keep_alive;
 
 uint32_t get_groups(pid_t pid,gid_t gid,uint32_t **gidtab) {
-#if defined(__linux__)
+#if defined(__linux__) || defined(__NetBSD__)
 // Linux - supplementary groups are in file:
 // /proc/<PID>/status
 // line:
 // Groups: <GID1> <GID2> <GID3> ...
+//
+// NetBSD - supplementary groups are in file:
+// /proc/<PID>/status
+// as comma separated list of gids at end of (single) line.
 	char proc_filename[50];
 	char linebuff[4096];
 	char *ptr;
@@ -67,11 +71,18 @@
 		return 1;
 	}
 	while (fgets(linebuff,4096,fd)) {
+#if defined(__NetBSD__)
+		if ((ptr = strrchr(linebuff, ' '))) {
+			if (strlen(linebuff) > (2 * strlen(ptr) + 8)) {
+				sprintf(linebuff, "Groups: %s", ptr);
+			}
+		}
+#endif
 		if (strncmp(linebuff,"Groups:",7)==0) {
 			gcount = 1;
 			ptr = linebuff+7;
 			do {
-				while (*ptr==' ' || *ptr=='\t') {
+				while (*ptr==' ' || *ptr=='\t' || *ptr==',') {
 					ptr++;
 				}
 				if (*ptr>='0' && *ptr<='9') {
@@ -80,14 +91,14 @@
 						gcount++;
 					}
 				}
-			} while (*ptr==' ' || *ptr=='\t');
+			} while (*ptr==' ' || *ptr=='\t' || *ptr==',');
 			*gidtab = malloc(sizeof(uint32_t)*gcount);
 			passert(*gidtab);
 			(*gidtab)[0] = gid;
 			n = 1;
 			ptr = linebuff+7;
 			do {
-				while (*ptr==' ' || *ptr=='\t') {
+				while (*ptr==' ' || *ptr=='\t' || *ptr==',') {
 					ptr++;
 				}
 				if (*ptr>='0' && *ptr<='9') {
@@ -97,7 +108,7 @@
 						n++;
 					}
 				}
-			} while ((*ptr==' ' || *ptr=='\t') && n<gcount);
+			} while ((*ptr==' ' || *ptr=='\t' || *ptr==',') && n<gcount);
 			fclose(fd);
 			return n;
 		}
--- mfsmount/sustained_inodes.c.orig	2016-01-12 08:20:34.000000000 +0100
+++ mfsmount/sustained_inodes.c	2016-01-14 09:08:00.000000000 +0100
@@ -130,7 +130,7 @@
 }
 
 
-#if defined(__linux__)
+#if defined(__linux__) || defined(__NetBSD__)
 #include <dirent.h>
 #elif defined(__APPLE__)
 #include <sys/types.h>
@@ -150,7 +150,7 @@
 	uint64_t inode;
 
 //	printf("pid: %d\n",ki->ki_pid);
-#if defined(__linux__)
+#if defined(__linux__) || defined(__NetBSD__)
 	char path[100];
 	struct stat st;
 	snprintf(path,100,"/proc/%lld/cwd",(long long int)pid);
@@ -251,7 +251,7 @@
 }
 
 void sinodes_all_pids(void) {
-#if defined(__linux__)
+#if defined(__linux__) || defined(__NetBSD__)
 	DIR *dd;
 	struct dirent *de,*destorage;
 	const char *np;

-- 
Elections cannot be allowed to change anything.  --Dr. Wolfgang Schäuble
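The core of Tom's getgroups.c patch is a format shim: NetBSD's single-line /proc/<PID>/status ends in a comma-separated gid list, and the patch rewrites that tail into the Linux-style "Groups: ..." line the existing parser already understands. The same transformation can be sketched in shell for illustration; the function name and sample status line below are hypothetical:

```shell
# netbsd_groups STATUS_LINE
# Takes one NetBSD-style status line (last field: comma-separated gids)
# and emits the equivalent Linux-style "Groups: g1 g2 ..." line,
# mirroring what the C patch does in-place on linebuff.
netbsd_groups() {
    printf '%s\n' "$1" | awk '{
        n = split($NF, g, ",");        # last field holds the gid list
        printf "Groups:";
        for (i = 1; i <= n; i++) printf " %s", g[i];
        print ""
    }'
}

# Example with a made-up NetBSD status line:
#   netbsd_groups "mfsmount 1234 10 20 0,100,1000"
```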
From: Piotr R. K. <pio...@mo...> - 2016-01-12 15:16:01

Dear MooseFS Users,

today, after a lot of tests, we published the newest version from the 3.x branch: 3.0.69. Please find the changes since 3.0.57 below:

MooseFS 3.0.69-1 (2016-01-12)
* (mount) fixed rare case when request memory was used after being freed

MooseFS 3.0.68-1 (2016-01-08)
* (cgi+cli) fixed UNKNOWN state for "marked for removal" readiness
* (mount) added protection from releasing inode memory too early in reading module
* (mount) use direct I/O as a default mode on Mac OS X (due to keep_cache bug in kernel/fuse)

MooseFS 3.0.67-1 (2016-01-07)
* (cgi+cli) added much more info for master in ELECT state (pro version only)
* (master+cgi+cli) added temporarily removing chunkservers in master in ELECT state (pro version only)
* (cgi) fixed bug in ChunkServers list (heavy load/internal rebalance)

MooseFS 3.0.66-1 (2016-01-04)
* (mount) use direct I/O as a default mode on FreeBSD (due to keep_cache bug in kernel/fuse)

MooseFS 3.0.65-1 (2015-12-23)
* (master+cs) added minimal version for master supervisors in chunkservers (pro version only)
* (cs) chunkserver can be started with empty mfshdd.cfg and work as a voter (pro version only)
* (master) mfsrmsnapshot will remove files without using trash
* (master+cs) added optional authorization

MooseFS 3.0.64-1 (2015-12-21)
* (master+mount) split trash into 4096 separate sub-trashes

MooseFS 3.0.63-1 (2015-12-17)
* (mount) fixed rare race condition in read (not dangerous)
* (mount) better read/write synchronization in read
* (master) new chart (used/total space)

MooseFS 3.0.62-1 (2015-12-11)
* (cs) added ability to start with metaid read from connected hard drives (from .metaid file)
* (cs) added protection from using disks filled more than 99.9% when there are other disks
* (master+cs+cli+cgi) added 'rebalance in progress' state to chunkservers (treated as heavy load state)
* (master) added ATIME_MODE option to set atime modification behaviour

MooseFS 3.0.61-1 (2015-11-30)
* (master) fixed lookup in case of missing chunks (lookup returned ENXIO in such case)
* (mount) added mfstimeout option to force timeout for all I/O operations
* (autotools) fixed problem with 'undefined reference to rpl_malloc/rpl_realloc'

MooseFS 3.0.60-1 (2015-11-23)
* (systemd) added TimeoutStartSec=1800 to master
* (master) fixed "parse error" message for broken network changelogs

MooseFS 3.0.59-1 (2015-11-06)
* (all) fixed debug symbols
* (master+supervisor) added metaid check (pro version only)
* (master) added rejection of followers with incorrect meta version (pro version only)
* (mount) change type of data stored in kernel from pointers to indexes (more robust)
* (cli) fixed show exports in plain mode
* (master) added sending metaid after switching from ELECT to LEADER (pro version only)
* (master) removed option '-e' from GPL edition (only makes sense in pro version)
* (master+cs) added sending metaid to cs in ELECT state (pro version only)
* (master) improved metaid generation method
* (mount) added mfsoomdisable option (Linux only, turned on by default)
* (mount) added minimum retry counter to log messages in I/O modules
* (mount) reduced memory consumption by reducing thread stack sizes
* (mount) changed default malloc arena count to 2 (Linux only)

MooseFS 3.0.58-1 (2015-10-30)
* (mount) added condition for requests in read data (request should begin before EOF)
* (systemd) fixed typo in mfscgiserv service file

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>
From: Ricardo J. B. <ric...@do...> - 2016-01-11 18:09:14

On Friday 08/01/2016, Aleksander Wieliczko wrote:
> Hi.
> Thank you for this information.
> Today we released MooseFS 2.0.83 as the stable version.
> MooseFS 3.0.x is still under development and is not ready for
> production environments.
> The problem appeared because a few repositories were created together
> with the 3.0.64 version.
> We apologize for this kind of inconvenience.

No problem, I had only updated a couple of staging systems, so it was easy to revert back to 2.0.x :-)

> About missing chunks:
> if you have MooseFS master and MooseFS cgi in version 2.0.83, you
> should see the missing files in the Info tab on the CGI web page.

Ah, I had an older mfscgi version on an external server (to centralize the visualization, as we have several MFS clusters). I upgraded mfscgi on that server and now I see the files with missing chunks, thanks!

> Also you can use
> mfscli -SMF -H mfsmaster.your.domain
> to see the list of missing files.

Even better!

> Best regards,
> Aleksander Wieliczko
>
> On 2016-01-08 17:39, Ricardo J. Barberis wrote:
>> Hello,
>>
>> Today I updated one of our systems and saw moosefs-client being
>> updated from 2.0.83 to 3.0.64.
>>
>> Is version 3.x considered production ready then?
>> Is it safe to upgrade our other clients too?
>>
>> Also, in a MooseFS 2.x cluster I have 2 missing chunks; mfsmaster logs
>> "chunk XXX_00000001: there are no copies" but I have no idea what
>> files those chunks refer to. How can I find out?
>>
>> (MooseFS 1.x used to log the filename too, but it seems 2.x doesn't
>> anymore.)
>>
>> Thanks in advance,

--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
From: Piotr R. K. <pio...@mo...> - 2016-01-11 17:34:00

Dear MooseFS Users,

we would like to inform you about some changes that were made in MooseFS 2.0 (stable) recently:

MooseFS 2.0.83-1 (2016-01-08)
* (mount) use direct I/O as a default mode on FreeBSD and Mac OS X (due to keep_cache bug in kernel/fuse)

MooseFS 2.0.82-1 (2015-11-06)
* (all) fixed debug symbols
* (master+supervisor) added metaid check (pro version only)
* (master) added rejection of followers with incorrect meta version (pro version only)
* (mount) added new mechanism for sustaining working directories (replaces mechanism added in 2.0.74)
* (mount) create in deleted directory returns EACCES only in OS X (ENOENT in other systems)
* (cli) fixed show exports in plain mode
* (master) added sending metaid after switching from ELECT to LEADER (pro version only)
* (master) removed option '-e' from GPL edition (only makes sense in pro version)
* (master+cs) added sending metaid to cs in ELECT state (pro version only)
* (master) improved metaid generation method
* (all) improved reloading cfg files (commented out options should be treated the same as options set to default values)

MooseFS 2.0.81-1 (2015-10-30)
* (systemd) fixed typo in mfscgiserv service file

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 30 Oct 2015, at 10:15 AM, Piotr Robert Konopelko <pio...@mo...> wrote:
>
> Dear MooseFS Users,
>
> We would like to inform you about some changes that were made in MooseFS recently:
>
> * We added a changelog of changes in MooseFS on our website:
>   https://moosefs.com/documentation/changes-in-moosefs-2-0.html
>   https://moosefs.com/documentation/changes-in-moosefs-3-0.html
> * We fixed a problem with installing MooseFS packages on Mac OS X 10.11
> * We published MooseFS v. 2.0.80 (stable) and 3.0.57 (current / testing)
> * We published a newer version of the packages in the Raspberry Pi repo (3.0.55)
> * Several bug fixes
>
> Please keep in mind that MooseFS 3.0 is not yet a stable version and it is not recommended for production environments.
>
> Please find below the changes made in MooseFS 2.0 this month:
>
> MooseFS 2.0.80-1 (2015-10-27)
> * (cs,master,metalogger) added 1 second timeout when connecting to master
> * (cs) force disconnection from master a couple of seconds after the term signal (frozen I/O threads can prevent CS from termination)
> * (systemd) fixed typo in mfscgiserv service file
> * (macosx) fixed packages to be compatible with OS X 10.11+
>
> MooseFS 2.0.79-1 (2015-10-16)
> * (master) fixed setting version of new chunks registered as 'marked for removal'
> * (master) added stronger condition for deleting invalid chunks
>
> MooseFS 2.0.78-1 (2015-10-09)
> * (rpm) added network-online.target to Wants and After in systemd service files (startup issues after reboot)
>
> MooseFS 2.0.77-1 (2015-09-25)
> * (mount) removed using fuse notify/forget mechanism in kernel with fuse api 7.23+ (due to unexpected kernel behaviour - getcwd returns ENOENT)
>
> MooseFS 2.0.76-1 (2015-09-11)
> * (mount) fixed rare bug in writing module (unrecoverable write error could lead to infinite loop during write)
>
> MooseFS 2.0.75-1 (2015-09-10)
> * (mount) fixed data-cache issue (delete only directories from kernel dentry cache)
> * (mount) inserting into xattr cache "nonexistent" xattr "security.capability" after file creation (speeds up writing small files)
> * (master) fixed scenario causing deleting chunks from chunkservers marked for removal
>
> Please find below the changes made in MooseFS 3.0 this month:
>
> MooseFS 3.0.57-1 (2015-10-27)
> * (metalogger) added 1 second timeout when connecting to master
> * (systemd) fixed typo in mfscgiserv service file
> * (macosx) fixed packages to be compatible with OS X 10.11+
>
> MooseFS 3.0.56-1 (2015-10-26)
> * (mount) fixed reading scenario: (read from empty chunk -> write chunk -> read this chunk again)
>
> MooseFS 3.0.55-1 (2015-10-20)
> * (master+cs) added 1 second timeout when connecting to master
>
> MooseFS 3.0.54-1 (2015-10-16)
> * (master) fixed setting version of new chunks registered as 'marked for removal'
> * (master) added stronger condition for deleting invalid chunks
> * (cs) changed condition for number of blocks to change to mark disk as damaged (allow changes up to 10%)
>
> MooseFS 3.0.53-1 (2015-10-13)
> * (cs) fixed typo (cnunk)
> * (mount) create in deleted directory returns EACCES only in OS X (ENOENT in other systems)
>
> MooseFS 3.0.52-1 (2015-10-09)
> * (mount) added new mechanism for sustaining working directories (replaces mechanism added in 3.0.40)
> * (cs) force disconnection from master a couple of seconds after the term signal (frozen I/O threads can prevent CS from termination)
> * (cs) when RO/RW status or total blocks changes then device is automatically marked as damaged
> * (master) added support for root inode and deleted inodes in MASS_RESOLVE_PATHS
> * (cli) fixed error displaying disconnected chunkservers
> * (rpm) added network-online.target to Wants and After in systemd service files (startup issues after reboot)
>
> Best regards,
>
> --
> Piotr Robert Konopelko
> MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>
From: Aleksander W. <ale...@mo...> - 2016-01-08 17:45:42

Hi.

Thank you for this information. Today we released MooseFS 2.0.83 as the stable version. MooseFS 3.0.x is still under development and is not ready for production environments. The problem appeared because a few repositories were created together with the 3.0.64 version. We apologize for this inconvenience.

About missing chunks: if you have MooseFS master and MooseFS cgi in version 2.0.83, you should see the missing files in the Info tab on the CGI web page. Also you can use

    mfscli -SMF -H mfsmaster.your.domain

to see the list of missing files.

Best regards,
Aleksander Wieliczko

On 2016-01-08 17:39, Ricardo J. Barberis wrote:
> Hello,
>
> Today I updated one of our systems and saw moosefs-client being updated
> from 2.0.83 to 3.0.64.
>
> Is version 3.x considered production ready then?
> Is it safe to upgrade our other clients too?
>
> Also, in a MooseFS 2.x cluster I have 2 missing chunks; mfsmaster logs
> "chunk XXX_00000001: there are no copies" but I have no idea what files
> those chunks refer to. How can I find out?
>
> (MooseFS 1.x used to log the filename too, but it seems 2.x doesn't
> anymore.)
>
> Thanks in advance,
From: Kaiser, P. <ph...@fr...> - 2016-01-08 17:12:06

Dear Sir or Madam,

I just tried to install MooseFS on one of our new servers, and it seems that the stable repo contains the package for moosefs-client 3.0.64-1. We use Ubuntu Precise and this package source:

    deb http://ppa.moosefs.com/stable/apt/ubuntu/precise precise main

I also tried this, with the same effect:

    deb http://ppa.moosefs.com/apt/ubuntu/precise precise main

When I try to mount a share provided by our 2.x MFS master, I get:

    incompatible mfsmaster version

I was looking for an announcement of the stable release of this version, but I couldn't find any. There also seems to be no documentation for the update from 2.x to 3.x. Is it possible that version 3 was accidentally put into the stable branch?

Kind regards,
Philipp
From: Ricardo J. B. <ric...@do...> - 2016-01-08 16:39:42

Hello,

Today I updated one of our systems and saw moosefs-client being updated from 2.0.83 to 3.0.64.

Is version 3.x considered production ready then? Is it safe to upgrade our other clients too?

Also, in a MooseFS 2.x cluster I have 2 missing chunks; mfsmaster logs "chunk XXX_00000001: there are no copies" but I have no idea what files those chunks refer to. How can I find out?

(MooseFS 1.x used to log the filename too, but it seems 2.x doesn't anymore.)

Thanks in advance,
--
Ricardo J. Barberis
Senior SysAdmin / IT Architect
DonWeb
La Actitud Es Todo
www.DonWeb.com
From: Piotr R. K. <pio...@mo...> - 2015-12-21 14:06:56

Hello,

please take a look at the Best Practices (especially point 5: "JBOD and XFS for Chunkservers"):
https://moosefs.com/documentation/best-practices.html

Best regards,

--
Piotr Robert Konopelko
MooseFS Technical Support Engineer | moosefs.com <https://moosefs.com/>

> On 21 Dec 2015, at 10:54 AM, Yves Réveillon - eurower.fr <yve...@eu...> wrote:
>
> Hi,
>
> no, don't convert them to RAID0!
>
> If a disk in a RAID0 fails, all content of the volume is lost
> (https://en.wikipedia.org/wiki/RAID).
>
> Each disk must be independent (format each disk /dev/sdx for use as one).
>
> Next, add each disk to MFS.
>
> MFS will report (mfscgi) if a disk has a problem (latency etc.); see
> the log files and SMART as well.
>
> With this configuration, if one disk fails, all the others are still OK
> in MFS.
>
> Best regards,
>
> Yves
>
> Alexander Akhobadze wrote:
>>
>> Hi
>> You are right. Convert them into RAID0.
>>
>> Alexander Akhobadze
>> akh...@ri...
>>
>> ======================================================
>> You wrote on 20 December 2015, 9:26:56:
>> ======================================================
>>
>> Hi All
>>
>> I have a discussion.
>>
>> Now I use two mfs chunkservers to save JBoss log data: Server A and
>> Server B, and mfs has good performance.
>>
>> Server A: 3 hard disks (each 2T, RAID 5)
>> Server B: 3 hard disks (each 500G, RAID 5)
>>
>> I have a thought: why not remove the servers' RAID and directly use
>> each disk for mfs chunks?
>> The reasons are:
>> 1. RAID is mostly used to keep data safe, but mfs already does that
>> if you choose a good copy number.
>> 2. RAID makes writes slower than no RAID for each disk.
>> 3. Removing RAID gives more disk space for mfs chunks.
>>
>> Am I right? I hope to get a response.
From: Yves R. - eurower.f. <yve...@eu...> - 2015-12-21 10:19:59
|
Hi,

no, don't convert them to RAID0!

If a disk in a RAID0 array fails, all content of the volume is lost
(https://en.wikipedia.org/wiki/RAID)

Each disk must be independent (format each disk /dev/sdx for use on its own).

Next, add each disk to MFS.

MFS will report (via mfscgi) if a disk has a problem (latency etc.); also check the log files and SMART.

With this configuration, if one disk fails, all the others stay OK in MFS.

Best regards,

Yves

Alexander Akhobadze wrote:
>
>
> Hi
> You are right. Convert them into RAID0.
>
> Ахобадзе Александр
> akh...@ri...
>
> ======================================================
> You wrote on 20 December 2015, 9:26:56:
> ======================================================
>
> Hi All
>
> I have a question for discussion.
>
> Now I use two MFS chunkservers to save JBoss log data, Server A and
> Server B, and MFS has had good performance.
>
> Server A: 3 hard disks (2 TB each, RAID 5)
> Server B: 3 hard disks (500 GB each, RAID 5)
>
> I have a thought: why not remove the servers' RAID and use each
> disk directly for MFS chunks?
> The reasons are:
> 1. RAID is mostly used to keep data safe, but MFS already does that if
> you choose a good copy number (goal).
> 2. RAID makes each disk write slower than running it without RAID.
> 3. Removing RAID frees more disk space for MFS chunks.
>
> Am I right? I hope to get a response.
> ------------------------------------------------------------------------
>
> ------------------------------------------------------------------------------
>
> ------------------------------------------------------------------------
>
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
> |
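[A sketch of the per-disk (JBOD) setup described above — device names, mount points, and the /etc/mfs config path are examples, not guaranteed defaults on every distribution:]

```shell
# Format each disk individually (XFS is the filesystem recommended in the
# MooseFS best-practices document referenced earlier in the thread).
# /dev/sdb and /dev/sdc are hypothetical spare disks -- adjust to your hardware.
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkdir -p /mnt/chunk1 /mnt/chunk2
mount /dev/sdb /mnt/chunk1
mount /dev/sdc /mnt/chunk2

# Register each mount point on its own line in mfshdd.cfg:
echo /mnt/chunk1 >> /etc/mfs/mfshdd.cfg
echo /mnt/chunk2 >> /etc/mfs/mfshdd.cfg

# Tell the chunkserver to pick up the new disks:
mfschunkserver reload
```

[With goal >= 2, a failed disk is simply marked damaged and its chunks are re-replicated from copies on other disks/servers; the remaining disks keep serving.]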
From: Alexander A. <akh...@ri...> - 2015-12-21 07:07:08
|
Hi
You are right. Convert them into RAID0.

Ахобадзе Александр
akh...@ri...

======================================================
You wrote on 20 December 2015, 9:26:56:
======================================================

Hi All

I have a question for discussion.

Now I use two MFS chunkservers to save JBoss log data, Server A and Server B, and MFS has had good performance.

Server A: 3 hard disks (2 TB each, RAID 5)
Server B: 3 hard disks (500 GB each, RAID 5)

I have a thought: why not remove the servers' RAID and use each disk directly for MFS chunks?
The reasons are:
1. RAID is mostly used to keep data safe, but MFS already does that if you choose a good copy number (goal).
2. RAID makes each disk write slower than running it without RAID.
3. Removing RAID frees more disk space for MFS chunks.

Am I right? I hope to get a response. |
From: Angus Y. 杨阳 <ang...@vi...> - 2015-12-20 06:27:06
|
Hi All

I have a question for discussion.

Now I use two MFS chunkservers to save JBoss log data, Server A and Server B, and MFS has had good performance.

Server A: 3 hard disks (2 TB each, RAID 5)
Server B: 3 hard disks (500 GB each, RAID 5)

I have a thought: why not remove the servers' RAID and use each disk directly for MFS chunks?
The reasons are:
1. RAID is mostly used to keep data safe, but MFS already does that if you choose a good copy number (goal).
2. RAID makes each disk write slower than running it without RAID.
3. Removing RAID frees more disk space for MFS chunks.

Am I right? I hope to get a response. |
From: Aleksander W. <ale...@mo...> - 2015-12-18 06:36:15
|
Hi, Would you be so kind and send us some system logs from the master server and the chunkservers? Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com <moosefs.com> On 12/17/2015 06:05 PM, Bruno Andrade wrote:
> Hi there
>
> We are setting up MooseFS here in our lab for some testing and we can't
> quite understand what's going on.
>
> Here is our small test scenario:
>
> We have two servers with 2 SSD disks each (480 GB each disk)
>
> Server1:
>
> Disk1 - Mounted on /mnt/brick1
> Disk2 - Mounted on /mnt/brick2
>
> Mounted the MooseFS mount on /mnt/moose
>
> This is also where my master is installed
>
> Server2:
>
> Disk1 - Mounted on /mnt/brick1
> Disk2 - Mounted on /mnt/brick2
>
> Mounted the MooseFS mount on /mnt/moose
>
> We can successfully see the 1.8 TB on /mnt/moose on both servers
>
> The questions are:
>
> - When our software writes to /mnt/moose, sometimes we see the file
> taking up space in the mount and sometimes not
>
> - When we looked at the 4 disks we have in the two machines, they are
> not being used at all! There are no files inside them and their free
> space remains at 480 GB each, even though we created a 200 GB file
>
> - How do I make it distribute the files and replicate across the 4
> different disks? (we set a goal of 2, but don't see
> anything being replicated on the disks)
>
> - In the MooseFS web interface, we still see 1.7 TB available. Doesn't it
> recognize the files we write to /mnt/moose, distribute them to the
> disks, and reduce that amount?
>
>
> Any help would be much appreciated
>
> Thanks
>
>
>
>
>
>
> ------------------------------------------------------------------------------
>
>
> _________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Ricardo J. B. <ric...@do...> - 2015-12-17 23:45:56
|
What MooseFS version and what OS? What is the output of 'cat /etc/mfs/mfshdd.cfg'? That's where you configure where MFS will store its files. Check whether your files are being created in /var/lib/mfs or /var/mfs, as I believe one of those is the default location if you don't define anything in mfshdd.cfg. Regards. On Thursday 17/12/2015, Bruno Andrade wrote:
> Hi there
>
> We are setting up MooseFS here in our lab for some testing and we can't quite
> understand what's going on.
>
> Here is our small test scenario:
>
> We have two servers with 2 SSD disks each (480 GB each disk)
>
> Server1:
>
> Disk1 - Mounted on /mnt/brick1
> Disk2 - Mounted on /mnt/brick2
>
> Mounted the MooseFS mount on /mnt/moose
>
> This is also where my master is installed
>
> Server2:
>
> Disk1 - Mounted on /mnt/brick1
> Disk2 - Mounted on /mnt/brick2
>
> Mounted the MooseFS mount on /mnt/moose
>
> We can successfully see the 1.8 TB on /mnt/moose on both servers
>
> The questions are:
>
> - When our software writes to /mnt/moose, sometimes we see the file taking
> up space in the mount and sometimes not
>
> - When we looked at the 4 disks we have in the two machines, they are not
> being used at all! There are no files inside them and their free space
> remains at 480 GB each, even though we created a 200 GB file
>
> - How do I make it distribute the files and replicate across the 4 different
> disks? (we set a goal of 2, but don't see anything being
> replicated on the disks)
>
> - In the MooseFS web interface, we still see 1.7 TB available. Doesn't it
> recognize the files we write to /mnt/moose, distribute them to the disks,
> and reduce that amount?
>
>
> Any help would be much appreciated
>
> Thanks |
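[A concrete way to run the check suggested above — /mnt/brick1 is the path from this thread, and the /etc/mfs config location may differ per installation:]

```shell
# Directories the chunkserver actually writes chunks to, one per line:
cat /etc/mfs/mfshdd.cfg

# If a listed path (e.g. /mnt/brick1) is really in use, the chunkserver
# creates subdirectories 00..FF under it, holding files named
# chunk_<id>_<version>.mfs:
ls /mnt/brick1/00 | head
```

[If mfshdd.cfg lists a path other than the bricks, the data lands there instead — which would explain full-size free space on the SSDs.]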
From: Bruno A. <br....@gm...> - 2015-12-17 17:05:52
|
Hi there

We are setting up MooseFS here in our lab for some testing and we can't quite understand what's going on.

Here is our small test scenario:

We have two servers with 2 SSD disks each (480 GB each disk)

Server1:

Disk1 - Mounted on /mnt/brick1
Disk2 - Mounted on /mnt/brick2

Mounted the MooseFS mount on /mnt/moose

This is also where my master is installed

Server2:

Disk1 - Mounted on /mnt/brick1
Disk2 - Mounted on /mnt/brick2

Mounted the MooseFS mount on /mnt/moose

We can successfully see the 1.8 TB on /mnt/moose on both servers

The questions are:

- When our software writes to /mnt/moose, sometimes we see the file taking up space in the mount and sometimes not

- When we looked at the 4 disks we have in the two machines, they are not being used at all! There are no files inside them and their free space remains at 480 GB each, even though we created a 200 GB file

- How do I make it distribute the files and replicate across the 4 different disks? (we set a goal of 2, but don't see anything being replicated on the disks)

- In the MooseFS web interface, we still see 1.7 TB available. Doesn't it recognize the files we write to /mnt/moose, distribute them to the disks, and reduce that amount?

Any help would be much appreciated

Thanks |
From: f. <fa...@eb...> - 2015-12-09 09:34:18
|
hi again, two HDDs (15K rpm, 300 GB, RAID1). I don't think the problem is HDD performance. I have changed MASTER_TIMEOUT=120 and will watch it. Thank you for your time.

2015-12-09
方垚 | (8610) 62368638-8906
From: Aleksander Wieliczko
Sent: 2015-12-09 16:49:24
To: fangyao; moosefs-users
Cc:
Subject: Re: [MooseFS-Users] del and add

Hi again,
Yes, you can change this parameter inside /etc/mfs/mfschunkserver.cfg; for a test: MASTER_TIMEOUT = 120
Can you check your HDD performance? Especially the place where metadata is saved; by default it is /var/lib/mfs.
Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/09/2015 09:07 AM, fangyao wrote:
hi.
Dell R420
CPU: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
RAM: 48G
mfsmaster uses 19 GB for about 68,714,195 files at goal 2, 25 TiB in total;

this is a snapshot taken while the master dumps metadata:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9058 mfs 6 -19 19.4g 19g 3096 R 100.0 41.2 0:06.51 mfsmaster
18477 mfs 0 -19 19.4g 19g 4016 S 2.6 41.2 3925:34 mfsmaster

The system does not use swap because there is enough RAM.

By the way, can the chunkserver disconnect timeout be configured?

2015-12-09
方垚 | (8610) 62368638-8906
From: Aleksander Wieliczko
Sent: 2015-12-09 15:31:43
To: fangyao; moosefs-users
Cc:
Subject: Re: [MooseFS-Users] del and add

Hi.
Thank you for this information.
The first problem we notice is that your chunkservers were disconnected at Dec 9 08:01:36. This kind of behaviour may indicate that your mfsmaster process is working really hard.
The second piece of information from your syslog is the metadata store time: it's about 100 seconds.
Combining this information, we can assume that you may be experiencing a SWAPPING problem.
Every hour the master dumps metadata in a separate subprocess, so it needs more RAM during this operation (about 10-20% more than usual).
So can you check this on your mfsmaster server:
- Amount of RAM installed in your hardware.
- Amount of RAM used by the master.
Also, can you run top at the full hour and check if your system uses swap? 
It is possible that you need to increase amount of RAM in your master server. Best regards Aleksander Wieliczko Technical Support Engineer MooseFS.com On 12/09/2015 04:39 AM, fangyao wrote: hi. all server MooseFS version 2.0.80-1 and we edited editchunks.c // syslog(LOG_WARNING,"chunkserver has nonexistent chunk (%016"PRIX64"_%08"PRIX32"), so create it for future deletion",chunkid,version); syslog Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: main master server module: (ip:192.168.1.46) write error: EPIPE (Broken pipe) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5), but server is still connected Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: can't accept chunkserver (ip: 192.168.1.82 / port: 9422) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.80 / port: 9422, usedspace: 852051292160 (793.53 GiB), totalspace: 2219235217408 (2066.82 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), 
totalspace: 12073950502912 (11244.74 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.78 / port: 9422, usedspace: 864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.84:9422,7) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.81:9422,4) Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB) Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.79:9422,2) Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB) Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.80:9422,3) Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.80 / port: 9422, usedspace: 852052361216 (793.54 GiB), totalspace: 2219235217408 (2066.82 GiB) Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.83:9422,6) Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB) Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.78:9422,1) Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.78 / port: 9422, usedspace: 
864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB) Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: child finished Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5) Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB) Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: store process has finished - store time: 99.380 Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.78 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.82 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.84 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.81 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.79 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.83 / port: 9422 has been fully removed from data structures Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.80 / port: 9422 has been fully removed from data structures Dec 9 08:02:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.79 / port: 9422 Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.78 / port: 9422 Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.80 / port: 9422 Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.81 / port: 9422 Dec 9 08:03:17 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.83 / 
port: 9422 Dec 9 08:03:19 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.84 / port: 9422 Dec 9 08:03:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.82 / port: 9422 Dec 9 08:26:01 mfsmaster1 mfsmaster[18477]: structure check loop

2015-12-09
方垚 | (8610) 62368638-8906
From: Aleksander Wieliczko
Sent: 2015-12-08 18:52:46
To: fangyao; moosefs-users
Cc:
Subject: Re: [MooseFS-Users] del and add

Hi.
Would you be so kind and send us some more details from syslog, and tell us what MooseFS version you have?
This is too small an amount of information to draw any conclusions.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/08/2015 11:06 AM, fangyao wrote:
[root@mfsmaster1 data]# grep 63946327 changelog.*mfs
changelog.11.mfs:18378184425: 1449525706|CHUNKDEL(63946327,1)
changelog.11.mfs:18378260221: 1449525792|CHUNKADD(63946327,1,1450130592)
changelog.21.mfs:18377349098: 1449489681|CHUNKDEL(63946327,1)
changelog.21.mfs:18377425708: 1449489774|CHUNKADD(63946327,1,1450094574)
changelog.28.mfs:18376418578: 1449464464|CHUNKDEL(63946327,1)
changelog.28.mfs:18376495479: 1449464550|CHUNKADD(63946327,1,1450069350)
changelog.35.mfs:18375745795: 1449439281|CHUNKDEL(63946327,1)
changelog.35.mfs:18375823198: 1449439371|CHUNKADD(63946327,1,1450044171)
changelog.37.mfs:18375586289: 1449432091|CHUNKDEL(63946327,1)
changelog.37.mfs:18375662761: 1449432181|CHUNKADD(63946327,1,1450036981)
changelog.3.mfs:18378924313: 1449554475|CHUNKDEL(63946327,1)
changelog.3.mfs:18379012240: 1449554572|CHUNKADD(63946327,1,1450159372)
changelog.45.mfs:18374453427: 1449403276|CHUNKDEL(63946327,1)
changelog.45.mfs:18374529796: 1449403358|CHUNKADD(63946327,1,1450008158)
changelog.6.mfs:18378583919: 1449543696|CHUNKDEL(63946327,1)
changelog.6.mfs:18378669372: 1449543789|CHUNKADD(63946327,1,1450148589)

These log entries are written when the EPIPE (Broken pipe) error occurs, and the monitor shows the count of locked unused files ascending and never deleted. EPIPE almost always happens during the metadata dump, while iowait is not high (about 15 per second). We want to solve the EPIPE problem.
Thanks

------------------------------------------------------------------------------
Go from Idea to Many App Stores Faster with Intel(R) XDK
Give your users amazing mobile app experiences with Intel(R) XDK.
Use one codebase in this all-in-one HTML5 development environment.
Design, debug & build mobile apps & 2D/3D high-impact games for multiple OSs.
http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140
_________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users |
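[The CHUNKDEL/CHUNKADD pairs in the grep output above can be checked mechanically. A small sketch, assuming the `timestamp|OPERATION(chunkid,...)` changelog format shown above, that prints how many seconds passed between each delete and the following re-add:]

```shell
# Each changelog line looks like:
#   changelog.11.mfs:18378184425: 1449525706|CHUNKDEL(63946327,1)
# Splitting on '|', '(' and ',' puts the timestamp at the end of $1
# and the operation name in $2.
awk -F'[|(,]' '
/CHUNKDEL/ { split($1, a, " "); del = a[2] }                  # remember delete time
/CHUNKADD/ { split($1, a, " "); if (del) print a[2] - del; del = 0 }
' changelog.11.mfs
```

[For the changelog.11.mfs pair above this prints 86, i.e. the chunk was re-added 86 seconds after being deleted.]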
From: Aleksander W. <ale...@mo...> - 2015-12-09 08:49:29
|
Hi again,
Yes, you can change this parameter inside /etc/mfs/mfschunkserver.cfg; for a test: MASTER_TIMEOUT = 120
Can you check your HDD performance? Especially the place where metadata is saved; by default it is /var/lib/mfs.
Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>
On 12/09/2015 09:07 AM, fangyao wrote:
> hi.
> Dell R420
> CPU: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
> RAM: 48G
> mfsmaster uses 19 GB for about 68,714,195 files at goal 2, 25 TiB in total;
>
> this is a snapshot taken while the master dumps metadata:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 9058 mfs 6 -19 19.4g 19g 3096 R 100.0 41.2 0:06.51 mfsmaster
> 18477 mfs 0 -19 19.4g 19g 4016 S 2.6 41.2 3925:34 mfsmaster
>
> The system does not use swap because there is enough RAM.
>
> By the way, can the chunkserver disconnect timeout be configured?
>
> 2015-12-09
> ------------------------------------------------------------------------
> *方垚 | (8610) 62368638-8906*
> ------------------------------------------------------------------------
> *From:* Aleksander Wieliczko
> *Sent:* 2015-12-09 15:31:43
> *To:* fangyao; moosefs-users
> *Cc:*
> *Subject:* Re: [MooseFS-Users] del and add
> Hi.
> Thank you for this information.
>
> The first problem we notice is that your chunkservers were
> disconnected at Dec 9 08:01:36. This kind of behaviour may indicate
> that your mfsmaster process is working really hard.
> The second piece of information from your syslog is the metadata store
> time: it's about 100 seconds.
> Combining this information, we can assume that you may
> be experiencing a SWAPPING problem.
> Every hour the master dumps metadata in a separate subprocess, so it needs
> more RAM during this operation (about 10-20% more than usual).
>
> So can you check this on your mfsmaster server:
>
> - Amount of RAM installed in your hardware.
> - Amount of RAM used by the master.
>
> Also, can you run top at the full hour and check if your system uses swap?
> It is possible that you need to increase the amount of RAM in your master
> server. 
> > Best regards > Aleksander Wieliczko > Technical Support Engineer > MooseFS.com <moosefs.com> > On 12/09/2015 04:39 AM, fangyao wrote: >> hi. >> all server MooseFS version 2.0.80-1 and we edited editchunks.c >> // >> syslog(LOG_WARNING,"chunkserver has nonexistent chunk (%016"PRIX64"_%08"PRIX32"), so create it for future deletion",chunkid,version); >> >> >> syslog >> >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: main master server module: (ip:192.168.1.46) write error: EPIPE (Broken pipe) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5), but server is still connected >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: can't accept chunkserver (ip: 192.168.1.82 / port: 9422) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.80 / port: 9422, usedspace: 852051292160 (793.53 GiB), totalspace: 2219235217408 (2066.82 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), 
totalspace: 12073950502912 (11244.74 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.78 / port: 9422, usedspace: 864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.84:9422,7) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.81:9422,4) >> Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB) >> Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.79:9422,2) >> Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB) >> Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.80:9422,3) >> Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.80 / port: 9422, usedspace: 852052361216 (793.54 GiB), totalspace: 2219235217408 (2066.82 GiB) >> Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.83:9422,6) >> Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB) >> Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.78:9422,1) >> Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 
192.168.1.78 / port: 9422, usedspace: 864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB) >> Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: child finished >> Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5) >> Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB) >> Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: store process has finished - store time: 99.380 >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.78 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.82 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.84 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.81 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.79 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.83 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.80 / port: 9422 has been fully removed from data structures >> Dec 9 08:02:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.79 / port: 9422 >> Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.78 / port: 9422 >> Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.80 / port: 9422 >> Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.81 / port: 9422 >> Dec 9 08:03:17 mfsmaster1 
mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.83 / port: 9422 >> Dec 9 08:03:19 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.84 / port: 9422 >> Dec 9 08:03:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.82 / port: 9422 >> Dec 9 08:26:01 mfsmaster1 mfsmaster[18477]: structure check loop >> >> >> >> 2015-12-09 >> ------------------------------------------------------------------------ >> *方垚| (8610) 62368638-8906* >> ------------------------------------------------------------------------ >> *发件人:* Aleksander Wieliczko >> *发送时间:* 2015-12-08 18:52:46 >> *收件人:* fangyao; moosefs-users >> *抄送:* >> *主题:* Re: [MooseFS-Users] del and add >> Hi. >> Would you be so kind and send us some more details from syslog and >> tell us what MooseFS version you have? >> This is to small amount of information to draw some conclusions. >> >> >> Best regards >> Aleksander Wieliczko >> Technical Support Engineer >> MooseFS.com <moosefs.com> >> On 12/08/2015 11:06 AM, fangyao wrote: >>> [root@mfsmaster1 data]# grep 63946327 changelog.*mfs >>> changelog.11.mfs:18378184425: 1449525706|CHUNKDEL(63946327,1) >>> changelog.11.mfs:18378260221: 1449525792|CHUNKADD(63946327,1,1450130592) >>> changelog.21.mfs:18377349098: 1449489681|CHUNKDEL(63946327,1) >>> changelog.21.mfs:18377425708: 1449489774|CHUNKADD(63946327,1,1450094574) >>> changelog.28.mfs:18376418578: 1449464464|CHUNKDEL(63946327,1) >>> changelog.28.mfs:18376495479: 1449464550|CHUNKADD(63946327,1,1450069350) >>> changelog.35.mfs:18375745795: 1449439281|CHUNKDEL(63946327,1) >>> changelog.35.mfs:18375823198: 1449439371|CHUNKADD(63946327,1,1450044171) >>> changelog.37.mfs:18375586289: 1449432091|CHUNKDEL(63946327,1) >>> changelog.37.mfs:18375662761: 1449432181|CHUNKADD(63946327,1,1450036981) >>> changelog.3.mfs:18378924313: 1449554475|CHUNKDEL(63946327,1) >>> changelog.3.mfs:18379012240: 
1449554572|CHUNKADD(63946327,1,1450159372) >>> changelog.45.mfs:18374453427: 1449403276|CHUNKDEL(63946327,1) >>> changelog.45.mfs:18374529796: 1449403358|CHUNKADD(63946327,1,1450008158) >>> changelog.6.mfs:18378583919: 1449543696|CHUNKDEL(63946327,1) >>> changelog.6.mfs:18378669372: 1449543789|CHUNKADD(63946327,1,1450148589) >>> >>> this log be write when EPIPE (Broken pipe) and monitor locked unused file Ascending never del >>> almost EPIPE when dump and iowait not high per second 15 >>> we want solve problem EPIPE >>> thx >>> >>> >>> ------------------------------------------------------------------------------ >>> Go from Idea to Many App Stores Faster with Intel(R) XDK >>> Give your users amazing mobile app experiences with Intel(R) XDK. >>> Use one codebase in this all-in-one HTML5 development environment. >>> Design, debug & build mobile apps & 2D/3D high-impact games for multiple OSs. >>> http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140 >>> >>> >>> _________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > |
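[Following the suggestion in the message above, a quick way to look for swapping on the master around the full hour, when the metadata-dump child process temporarily needs extra RAM — run these on the mfsmaster host while the dump is in progress:]

```shell
# SwapTotal vs SwapFree from /proc/meminfo: a growing gap during the
# hourly dump means the master is being pushed into swap.
grep -E 'Swap(Total|Free)' /proc/meminfo

# Live check: non-zero si/so columns mean pages are actively moving
# to/from swap.
vmstat 1 5
```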
From: f. <fa...@eb...> - 2015-12-09 08:08:12
|
hi.
Dell R420
CPU: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz
RAM: 48G
mfsmaster uses 19 GB for about 68,714,195 files at goal 2, 25 TiB in total;

this is a snapshot taken while the master dumps metadata:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9058 mfs 6 -19 19.4g 19g 3096 R 100.0 41.2 0:06.51 mfsmaster
18477 mfs 0 -19 19.4g 19g 4016 S 2.6 41.2 3925:34 mfsmaster

The system does not use swap because there is enough RAM.

By the way, can the chunkserver disconnect timeout be configured?

2015-12-09
方垚 | (8610) 62368638-8906
From: Aleksander Wieliczko
Sent: 2015-12-09 15:31:43
To: fangyao; moosefs-users
Cc:
Subject: Re: [MooseFS-Users] del and add

Hi.
Thank you for this information.
The first problem we notice is that your chunkservers were disconnected at Dec 9 08:01:36. This kind of behaviour may indicate that your mfsmaster process is working really hard.
The second piece of information from your syslog is the metadata store time: it's about 100 seconds.
Combining this information, we can assume that you may be experiencing a SWAPPING problem.
Every hour the master dumps metadata in a separate subprocess, so it needs more RAM during this operation (about 10-20% more than usual).
So can you check this on your mfsmaster server:
- Amount of RAM installed in your hardware.
- Amount of RAM used by the master.
Also, can you run top at the full hour and check if your system uses swap?
It is possible that you need to increase the amount of RAM in your master server.
Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com
On 12/09/2015 04:39 AM, fangyao wrote:
hi. 
Hi,

All servers run MooseFS version 2.0.80-1, and we edited chunks.c, commenting out this syslog call:

// syslog(LOG_WARNING,"chunkserver has nonexistent chunk (%016"PRIX64"_%08"PRIX32"), so create it for future deletion",chunkid,version);

syslog:

Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: main master server module: (ip:192.168.1.46) write error: EPIPE (Broken pipe)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5), but server is still connected
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: can't accept chunkserver (ip: 192.168.1.82 / port: 9422)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.80 / port: 9422, usedspace: 852051292160 (793.53 GiB), totalspace: 2219235217408 (2066.82 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver disconnected - ip: 192.168.1.78 / port: 9422, usedspace: 864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.84:9422,7)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.84 / port: 9422, usedspace: 4586834935808 (4271.82 GiB), totalspace: 12073407995904 (11244.24 GiB)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.81:9422,4)
Dec 9 08:01:36 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.81 / port: 9422, usedspace: 860428738560 (801.34 GiB), totalspace: 2240201351168 (2086.35 GiB)
Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.79:9422,2)
Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.79 / port: 9422, usedspace: 863847518208 (804.52 GiB), totalspace: 2249241448448 (2094.77 GiB)
Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.80:9422,3)
Dec 9 08:01:37 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.80 / port: 9422, usedspace: 852052361216 (793.54 GiB), totalspace: 2219235217408 (2066.82 GiB)
Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.83:9422,6)
Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.83 / port: 9422, usedspace: 4585739321344 (4270.80 GiB), totalspace: 12073407995904 (11244.24 GiB)
Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.78:9422,1)
Dec 9 08:01:38 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.78 / port: 9422, usedspace: 864117645312 (804.77 GiB), totalspace: 2249241448448 (2094.77 GiB)
Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: child finished
Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: csdb: found cs using ip:port and csid (192.168.1.82:9422,5)
Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: chunkserver register begin (packet version: 6) - ip: 192.168.1.82 / port: 9422, usedspace: 4585512808448 (4270.59 GiB), totalspace: 12073950502912 (11244.74 GiB)
Dec 9 08:01:40 mfsmaster1 mfsmaster[18477]: store process has finished - store time: 99.380
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.78 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.82 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.84 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.81 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.79 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.83 / port: 9422 has been fully removed from data structures
Dec 9 08:02:18 mfsmaster1 mfsmaster[18477]: server ip: 192.168.1.80 / port: 9422 has been fully removed from data structures
Dec 9 08:02:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.79 / port: 9422
Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.78 / port: 9422
Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.80 / port: 9422
Dec 9 08:02:32 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.81 / port: 9422
Dec 9 08:03:17 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.83 / port: 9422
Dec 9 08:03:19 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.84 / port: 9422
Dec 9 08:03:20 mfsmaster1 mfsmaster[18477]: chunkserver register end (packet version: 6) - ip: 192.168.1.82 / port: 9422
Dec 9 08:26:01 mfsmaster1 mfsmaster[18477]: structure check loop

2015-12-09
方垚 | (8610) 62368638-8906

From: Aleksander Wieliczko
Sent: 2015-12-08 18:52:46
To: fangyao; moosefs-users
Subject: Re: [MooseFS-Users] del and add

Hi,

Would you be so kind as to send us some more details from syslog and tell us which MooseFS version you are running? This is too little information to draw any conclusions.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com

On 12/08/2015 11:06 AM, fangyao wrote:

[root@mfsmaster1 data]# grep 63946327 changelog.*mfs
changelog.11.mfs:18378184425: 1449525706|CHUNKDEL(63946327,1)
changelog.11.mfs:18378260221: 1449525792|CHUNKADD(63946327,1,1450130592)
changelog.21.mfs:18377349098: 1449489681|CHUNKDEL(63946327,1)
changelog.21.mfs:18377425708: 1449489774|CHUNKADD(63946327,1,1450094574)
changelog.28.mfs:18376418578: 1449464464|CHUNKDEL(63946327,1)
changelog.28.mfs:18376495479: 1449464550|CHUNKADD(63946327,1,1450069350)
changelog.35.mfs:18375745795: 1449439281|CHUNKDEL(63946327,1)
changelog.35.mfs:18375823198: 1449439371|CHUNKADD(63946327,1,1450044171)
changelog.37.mfs:18375586289: 1449432091|CHUNKDEL(63946327,1)
changelog.37.mfs:18375662761: 1449432181|CHUNKADD(63946327,1,1450036981)
changelog.3.mfs:18378924313: 1449554475|CHUNKDEL(63946327,1)
changelog.3.mfs:18379012240: 1449554572|CHUNKADD(63946327,1,1450159372)
changelog.45.mfs:18374453427: 1449403276|CHUNKDEL(63946327,1)
changelog.45.mfs:18374529796: 1449403358|CHUNKADD(63946327,1,1450008158)
changelog.6.mfs:18378583919: 1449543696|CHUNKDEL(63946327,1)
changelog.6.mfs:18378669372: 1449543789|CHUNKADD(63946327,1,1450148589)

These log entries are written when EPIPE (Broken pipe) occurs, and our monitoring shows the count of locked, unused files rising and never being deleted. EPIPE almost always coincides with the metadata dump, while iowait stays low (around 15 per second). We want to solve the EPIPE problem.

Thanks
|
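[Editorial note: Aleksander's advice in this thread ("run top at the full hour and check whether your system uses swap") can be scripted. A minimal sketch, assuming a Linux /proc layout and a process named mfsmaster; it is meant to be scheduled from cron at the top of the hour, when the master forks the metadata-dump subprocess.]

```shell
#!/bin/sh
# Sample swap usage and mfsmaster resident memory once.
# Assumptions: Linux /proc/meminfo and procps ps; run from cron at the
# full hour, while the forked dump child (the ~100% CPU process in the
# top snapshot above) is active.
swap_used_kb=$(awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {print t-f}' /proc/meminfo)
# Sum RSS over all mfsmaster processes (serving parent + dump child);
# prints 0 if no such process is running.
master_rss_kb=$(ps -C mfsmaster -o rss= | awk '{s+=$1} END {print s+0}')
echo "swap_used_kb=$swap_used_kb mfsmaster_rss_kb=$master_rss_kb"
```

If swap_used_kb climbs only around the full hour while mfsmaster_rss_kb grows by 10-20%, that matches the swapping scenario described above and more RAM in the master is the likely fix.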