From: Boyko Y. <b.y...@ex...> - 2011-03-21 11:10:29
Hi list,

I'm wondering how you are all handling mfsmaster failover. In my tests mfsmetalogger seems quite unreliable: two days of testing produced several cases in which mfsmetarestore was unable to restore the metadata.mfs file, failing with various errors such as "data mismatch", "version mismatch" and "hole in change files (add more files)". I'm running 3 different metadata backup loggers; master and chunk servers all run mfs-1.6.20-2 on CentOS 5.5 x86_64, and the filesystem type is ext3.

I'm aware that some of you run huge clusters with terabytes of data. How much do you trust your mfsmaster? Am I the only one concerned about eventual data loss on mfsmaster failover, when mfsmetarestore does not properly restore the metadata.mfs file from the changelogs?

Boyko
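For reference, the restore-and-promote sequence being discussed is roughly the sketch below. This is a best-effort outline, not an official procedure: it assumes the default /var/lib/mfs data directory and the 1.6.x tool names used elsewhere in this thread.

    # on the metalogger host being promoted to master
    mfsmetarestore -a -d /var/lib/mfs   # replays changelog_ml.*.mfs onto metadata_ml.mfs.back
    mfsmaster                           # start a master on the restored metadata.mfs
    # then repoint mounts and chunkservers (MASTER_HOST / master address) at this host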
From: R.C. <mil...@gm...> - 2011-03-21 09:43:21
Thank you so much Michal! I'm going to start working on this shortly.

Raf

----- Original Message -----
From: "Michal Borychowski" <mic...@ge...>
To: "'R.C.'" <mil...@gm...>
Cc: <moo...@li...>
Sent: Monday, March 21, 2011 9:47 AM
Subject: RE: [Moosefs-users] Mac OS X mounting hints

[quoted reply trimmed; the full text is in Michal's message of 2011-03-21 08:47 below]
From: Michal B. <mic...@ge...> - 2011-03-21 09:22:51
Hi!

Lazy umount should work with MooseFS (if the operating system supports it). You just need to try it, e.g.:

    umount -l /mnt/mfs ; mfsmount /mnt/mfs

Regards
Michal

-----Original Message-----
From: Flow Jiang [mailto:fl...@gm...]
Sent: Thursday, March 17, 2011 3:57 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Reserved files remain forever

[quoted thread trimmed; the full chain is in Flow Jiang's message of 2011-03-17 14:56 below]
From: Michal B. <mic...@ge...> - 2011-03-21 09:09:22
Hi!

Please read this question in the archive:
http://sourceforge.net/mailarchive/message.php?msg_id=26893940
and my reply to it:
http://sourceforge.net/mailarchive/message.php?msg_id=26897668

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00  Fax: +4822 874-41-01

From: mung ru tsai [mailto:vt...@gm...]
Sent: Friday, March 18, 2011 10:42 AM
To: moo...@li...
Subject: [Moosefs-users] different chunk disk size

[quoted question trimmed; the full text is in mung ru tsai's message of 2011-03-18 below]
From: Michal B. <mic...@ge...> - 2011-03-21 08:47:42
Hi Raf!

Generally speaking everything works quite normally on Mac OS X, as on other *nix platforms. It may happen, though, that Mac OS X caches some attributes even though mfsmount forbids it. It shouldn't be a big problem, but if it is, you can try the "novncache" or "noubc" options.

You may want a dedicated start script for mfsmount. We use the following solution: in "/Library/StartupItems" we created a subfolder "mfsmount" and put two files there.

StartupParameters.plist:

    {
      Description     = "MFS mounter";
      Provides        = ("MFS");
      Requires        = ("Network");
      OrderPreference = "Late";
    }

And a script "mfsmount":

    #!/bin/sh
    /usr/sbin/mfsmount /mnt/mfs

We also had to copy "mfsmount" from the default folder to /usr/sbin (cp /usr/local/bin/mfsmount /usr/sbin/); alternatively, you can change the path in the "mfsmount" script.

Hope it helps.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00  Fax: +4822 874-41-01

-----Original Message-----
From: R.C. [mailto:mil...@gm...]
Sent: Tuesday, March 15, 2011 4:00 PM
To: moo...@li...
Subject: [Moosefs-users] Mac OS X mounting hints

[quoted question trimmed; the full text is in R.C.'s message of 2011-03-15 below]
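A sketch of putting those two files in place: classic StartupItems expects the directory to contain an executable script with the same name as the directory. The paths below are the ones from this message; adjust if your install differs.

    sudo mkdir -p /Library/StartupItems/mfsmount
    # create StartupParameters.plist and the "mfsmount" script shown above in that folder
    sudo chmod +x /Library/StartupItems/mfsmount/mfsmount
    sudo cp /usr/local/bin/mfsmount /usr/sbin/   # so the script's /usr/sbin/mfsmount path resolves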
From: Boyko Y. <b.y...@ex...> - 2011-03-20 12:45:36
Hello!

I've been using MooseFS for a while, with 3 metadata backup loggers running. I noticed that if I kill the mfsmaster process on the master node (simulating a power failure), mfsmetalogger crashes (segfault) on the metadata logger node. Here are the log entries:

Mar 20 11:45:35 server110 mfsmetalogger[6546]: metadata downloaded 72105B/0.009982s (7.224 MB/s)
Mar 20 11:45:35 server110 mfsmetalogger[6546]: changelog_0 downloaded 0B/0.000001s (0.000 MB/s)
Mar 20 11:45:35 server110 mfsmetalogger[6546]: changelog_1 downloaded 164193B/0.015491s (10.599 MB/s)
Mar 20 11:45:35 server110 mfsmetalogger[6546]: sessions downloaded 3050B/0.001501s (2.032 MB/s)
Mar 20 11:46:03 server110 mfsmetalogger[6546]: sessions downloaded 3050B/0.001497s (2.037 MB/s)
Mar 20 11:47:00 server110 mfsmetalogger[6546]: sessions downloaded 3050B/0.001246s (2.448 MB/s)
Mar 20 11:48:48 server110 mfsmetalogger[6546]: sessions downloaded 3050B/0.001009s (3.023 MB/s)
Mar 20 11:48:48 server110 mfsmetalogger[6546]: connection was reset by Master
Mar 20 11:49:00 server110 mfsmetalogger[6546]: connecting ...
Mar 20 11:49:00 server110 mfsmetalogger[6546]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 11:49:05 server110 mfsmetalogger[6546]: connecting ...
Mar 20 11:49:05 server110 mfsmetalogger[6546]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 11:49:06 server110 kernel: mfsmetalogger[6546]: segfault at 0000000000000060 rip 000000318c26119d rsp 00007fff2f368170 error 4

And from another metadata logger:

Mar 20 13:33:00 server102 mfsmetalogger[5088]: sessions downloaded 3388B/0.000993s (3.412 MB/s)
Mar 20 13:34:00 server102 mfsmetalogger[5088]: sessions downloaded 3388B/0.001000s (3.388 MB/s)
Mar 20 13:35:00 server102 mfsmetalogger[5088]: sessions downloaded 3388B/0.001000s (3.388 MB/s)
Mar 20 13:35:48 server102 mfsmetalogger[5088]: connection was reset by Master
Mar 20 13:35:50 server102 mfsmetalogger[5088]: connecting ...
Mar 20 13:35:50 server102 mfsmetalogger[5088]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 13:35:55 server102 mfsmetalogger[5088]: connecting ...
Mar 20 13:35:55 server102 mfsmetalogger[5088]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 13:35:56 server102 kernel: mfsmetalogger[5088]: segfault at 0000000000000060 rip 0000003c6386119d rsp 00007fff7d13a7d0 error 4
Mar 20 13:37:23 server102 mfsmetalogger[12676]: set gid to 502
Mar 20 13:37:23 server102 mfsmetalogger[12676]: set uid to 502
Mar 20 13:37:23 server102 mfsmetalogger[12676]: connecting ...
Mar 20 13:37:23 server102 mfsmetalogger[12676]: open files limit: 5000
Mar 20 13:37:23 server102 mfsmetalogger[12676]: connected to Master
Mar 20 13:37:23 server102 mfsmetalogger[12676]: metadata downloaded 72113B/0.013963s (5.165 MB/s)
Mar 20 13:37:23 server102 mfsmetalogger[12676]: changelog_0 downloaded 981876B/0.086934s (11.294 MB/s)
Mar 20 13:37:23 server102 mfsmetalogger[12676]: changelog_1 downloaded 164193B/0.015978s (10.276 MB/s)
Mar 20 13:37:23 server102 mfsmetalogger[12676]: sessions downloaded 3388B/0.001993s (1.700 MB/s)
Mar 20 13:39:00 server102 mfsmetalogger[12676]: sessions downloaded 3388B/0.002965s (1.143 MB/s)
Mar 20 13:40:00 server102 mfsmetalogger[12676]: sessions downloaded 3388B/0.001986s (1.706 MB/s)
Mar 20 13:41:00 server102 mfsmetalogger[12676]: sessions downloaded 3388B/0.000991s (3.419 MB/s)
Mar 20 13:41:23 server102 mfsmetalogger[12676]: connection was reset by Master
Mar 20 13:41:25 server102 mfsmetalogger[12676]: connecting ...
Mar 20 13:41:25 server102 mfsmetalogger[12676]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 13:41:30 server102 mfsmetalogger[12676]: connecting ...
Mar 20 13:41:30 server102 mfsmetalogger[12676]: connection failed, error: ECONNREFUSED (Connection refused)
Mar 20 13:41:31 server102 kernel: mfsmetalogger[12676]: segfault at 0000000000000060 rip 0000003c6386119d rsp 00007fff5207ee20 error 4

Both machines run CentOS 5.5 x86_64 with mfs-1.6.20-2, the same as the master.

Also, and I'm not sure if this is related: while running tests (killing the mfsmaster process and trying to restore from a metadata logger), sometimes I am unable to create the metadata.mfs data file, getting the following message:

[root@server102 mfs]# mfsmetarestore -a -d /var/lib/mfs
file 'metadata.mfs.back' not found - will try 'metadata_ml.mfs.back' instead
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... ok
connecting files and chunks ... ok
hole in change files (entries from 791301 to 791305 are missing) - add more files

I'm wondering why these entries are missing. Since the mfsmetalogger process crashes after the mfsmaster process is killed, can this be related? (BTW, I'm building the metadata.mfs file as suggested by Michal Borychowski in another email, regarding a bug in MooseFS when using snapshots.)

I can't tell for sure, but I think that if I clear the /var/lib/mfs folder (delete all the logs/files) and then start mfsmetalogger clean, there are no issues when restoring metadata.mfs; all goes fine (at least for the 10 times I've tried so far). So the "add more files" errors may be related to having old changelogs in /var/lib/mfs - can anyone confirm this?

Is anyone having similar issues? Thanks a lot!

Boyko
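A quick way to see whether the changelogs on a logger cover a contiguous range of change ids before running mfsmetarestore. This assumes the usual changelog line format of "<id>: <operation>"; verify with head -1 on one of the files first.

    cd /var/lib/mfs
    for f in $(ls -v changelog_ml.*.mfs); do
        printf '%s: %s .. %s\n' "$f" \
            "$(head -n1 "$f" | cut -d: -f1)" \
            "$(tail -n1 "$f" | cut -d: -f1)"
    done
    # any gap between one file's last id and the next file's first id
    # will reproduce the "hole in change files" complaint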
From: Robert D. <ro...@in...> - 2011-03-18 23:20:22
# mfsmetarestore -x -a /var/mfs/metadata_ml.mfs.back -d /var/mfs/
file 'metadata.mfs.back' not found - will try 'metadata_ml.mfs.back' instead
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... ok
connecting files and chunks ... ok
found changelog file 1: /var/mfs/changelog_ml.0.mfs
found changelog file 2: /var/mfs/changelog_ml.1.mfs
change: 1300482060|FREEINODES():2036
154746233: error: 1 (Operation not permitted)

I am the root user - is this a bug?
From: mung ru t. <vt...@gm...> - 2011-03-18 09:42:51
First, thanks for looking. My problem: I have 5 disks from 5 chunk servers (A: 1T, B: 1T, C: 1T, D: 1T, E: 2T) and I set goal=2. How does MFS store the data once A and B are full? Thank you for replying!
From: Robert D. <ro...@in...> - 2011-03-17 18:27:54
I guess I should ask whether I should be submitting Nexenta bugs, and whether this is something the project will support in the future. Either way, it looks like I was able to get specific trace information (shown below).

Problem: mfschunkserver segfaults after running for a few hours.

Output:

[New LWP 5]
main server module: listen on *:9422
[New LWP 6] [New LWP 7] [New LWP 8] [New LWP 9] [New LWP 10] [New LWP 11] [New LWP 12] [New LWP 13] [New LWP 14] [New LWP 15] [New LWP 16] [New LWP 17] [New LWP 18] [New LWP 19] [New LWP 20] [New LWP 21] [New LWP 22] [New LWP 23] [New LWP 24] [New LWP 25]
stats file has been loaded
mfschunkserver daemon initialized properly

Program received signal SIGPIPE, Broken pipe.
0xfeed57d5 in __write () from /lib/libc.so.1
(gdb) bt
#0  0xfeed57d5 in __write () from /lib/libc.so.1
#1  0xfeebf37e in write () from /lib/libc.so.1
#2  0x0804d684 in csserv_write (eptr=0x8a32bf0) at csserv.c:1631
#3  0x0804f604 in csserv_serve (pdesc=0x8033c90) at csserv.c:1900
#4  0x0806208c in mainloop () at ../mfscommon/main.c:348
#5  0x080626fb in main (argc=<value optimized out>, argv=0x40) at ../mfscommon/main.c:1101
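One note on reproducing this: gdb stops at SIGPIPE by default, and a chunkserver can receive SIGPIPE harmlessly whenever a client drops a connection mid-write. Standard gdb commands (nothing MooseFS-specific) let the run continue to the real crash:

    gdb /usr/local/sbin/mfschunkserver
    (gdb) handle SIGPIPE nostop noprint pass
    (gdb) run -d
    # ...wait for the actual segfault, then:
    (gdb) bt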
From: Stéphane B. <ste...@ga...> - 2011-03-17 15:40:43
It's starting to really concern me. Every loop I get missing chunks and under-goal files... We have production web servers running on this and we cannot afford unavailable files: when a file is unavailable, it means a lost sale. And the numbers are getting higher.

Maybe the load is too high? We have about 6 servers doing roughly 1 million lookups per hour each, 500k opens per hour and 70k reads per hour. We also have another MooseFS cluster with the same settings on different machines, and it has no under-goal or missing files.

Thanks

check loop start time:  Thu Mar 17 11:23:52 2011
check loop end time:    Thu Mar 17 15:24:58 2011
files: 2125751    under-goal files: 25891    missing files: 2075
chunks: 2119886   under-goal chunks: 25891   missing chunks: 2075

On 11-03-15 10:24 AM, Stéphane Boisvert wrote:
[quoted thread trimmed; the full chain is in Stéphane's message of 2011-03-15 14:24 below]

--
Stephane Boisvert
Unix Administrator
Msn: ste...@ga...
E-mail: ste...@ga...
From: Flow J. <fl...@gm...> - 2011-03-17 14:56:59
Great! Looking forward to trying MFS 1.6.21. Would it also work with lazy umount (umount -l)? It seems that AutoFS will use lazy umount under some circumstances, like a forced shutdown / reboot, but I'm not so sure about this.

Thanks
Flow

On 03/16/2011 10:02 PM, Michal Borychowski wrote:
> Hi!
>
> We added a small change in 1.6.21, i.e. while unmounting, mfsmount will now send a "close session" command to the master. Such properly closed sessions will not be left "hanging" throughout this 7-day period.
>
> Kind regards
> Michal
>
> -----Original Message-----
> From: Flow Jiang [mailto:fl...@gm...]
> Sent: Tuesday, March 01, 2011 5:06 PM
> To: moo...@li...
> Subject: Re: [Moosefs-users] Reserved files remain forever
>
> O.K. I think I lied in the last post. I changed all our configuration to use fstab to mount the FS permanently, with "-fstype=bind" in AutoFS to link it to the user homes. But I still have more than 100 reserved files today, most of them looking like temp files (like .local|share|gvfs-metadata|home-8a63e833.log).
>
> The interesting thing is that we have nightly builds running on a non-UserHome MooseFS-mounted directory, which also creates / deletes temp files, but none of them gets reserved. So I suspect this might be caused by the way applications write files.
>
> So my question is: does anyone have a similar issue when MooseFS serves as home? Do you have reserved files remaining for a long time? And they will be deleted after 1 week, right?
>
> BTW, the "-fstype=bind" workaround does fix the core dump issue.
>
> Thanks
> Flow
>
> On 02/27/2011 01:20 AM, Flow Jiang wrote:
>> Hi,
>>
>> I now know how to re-create the issue, and it should also be related to AutoFS.
>>
>> I noticed that if I make a new file with non-zero size and delete it immediately, it is set as a reserved file and keeps that state for a while (about 1 minute in our environment). If the underlying MooseFS is unmounted within this short period, the behavior differs between an fstab-mounted FS and an AutoFS-mounted FS:
>>
>> * If the FS is mounted by fstab: create and delete a file, then unmount the FS; the file eventually gets deleted.
>> * If the FS is mounted by autofs: create and delete a file, then unmount the FS; the file remains reserved forever.
>>
>> And unfortunately we rely on the timeout feature of AutoFS, so unmounts happen frequently. This is the reason we have so many cache files (created then deleted during one mount session) remaining as reserved files forever.
>>
>> Our AutoFS mount options are mentioned in the other thread (http://sourceforge.net/mailarchive/forum.php?thread_name=4D68E18F.3000708%40gmail.com&forum_name=moosefs-users), and the issue looks the same:
>>
>> * If the FS is mounted by fstab, writing a file (> 1M) to some folder and then unmounting works O.K.
>> * If the FS is mounted by autofs, writing a file (> 1M) to some folder and then unmounting dumps a core file.
>>
>> Increasing the autofs timeout would help with the reserved-file issue but not with the core dump issue. And even if the timeout is increased, a restart / power off of the system still won't leave enough time for the file to be totally deleted.
>>
>> I do have a workaround that fixes the 2 issues temporarily: mount the entire MFS system with fstab and use the "-fstype=bind" option in autofs to make the auto mounting/unmounting happen. But this is complicated, and different mfsmount options can't be set for different subfolders. So I do hope I can have native AutoFS-supported MooseFS mounts.
>>
>> Can anyone help with this?
>>
>> Many Thanks
>> Flow
>>
>> On 02/22/2011 12:03 AM, Flow Jiang wrote:
>>> Hi,
>>>
>>> Just found another issue. I cleared about 10000 reserved files with the script provided at http://sourceforge.net/tracker/?func=detail&aid=3104619&group_id=228631&atid=1075722 yesterday, and this morning I had 0 reserved files when I started working.
>>>
>>> However, after one day of development activity with 6 workstations, we now have 204 reserved files not deleted. I've noticed it's stated in the above link that "Each session after two hours is automatically closed and all the files are released", but it seems that's not happening in our environment. We have CentOS 5.5 x86 servers and run mfsmount on Fedora 12 x64 workstations. Both servers and workstations run mfs 1.6.19, and MFS serves as home with read / write access.
>>>
>>> Here are some examples of the reserved files, from reading the metadata:
>>>
>>> 00067856|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal
>>> 00067857|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal
>>>
>>> Most of the 204 reserved files look like temp / journal files.
>>>
>>> Any ideas about the cause of the issue?
>>>
>>> BTW, OpenOffice fails to start if MFS serves as the home directory. It should be an FS bug, as stated at http://qa.openoffice.org/issues/show_bug.cgi?id=113207. Would it be related to the issue above? And can we fix this OOo issue?
>>>
>>> Many Thanks
>>> Flow
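For anyone wanting to reproduce the workaround, it is, roughly, one permanent mfsmount plus a wildcard bind map. The map names, mount point and master hostname below are assumptions; adjust to your layout.

    # /etc/fstab - mount the whole MFS tree once, permanently
    mfsmount  /mnt/mfs  fuse  mfsmaster=mfsmaster.example,_netdev  0 0

    # /etc/auto.master
    /home  /etc/auto.home  --timeout=60

    # /etc/auto.home - bind each user's home out of the fstab-mounted tree
    *  -fstype=bind  :/mnt/mfs/UserHome/&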
From: TianYuchuan(田玉川) <ti...@fo...> - 2011-03-17 09:55:51
Hello,

My MooseFS system is being accessed very slowly and I have no idea why - please help me! Thanks!

Files: 104964618; chunks: 104963962. The master load is not high, but roughly every hour there is a period, lasting several minutes, during which data cannot be accessed at all. In general the concurrency is low, yet accessing data still takes a few seconds.

My MooseFS system has nine chunkservers:

1  localhost  192.168.0.118  9422  1.6.19  23387618  3.6 TiB  4.5 TiB  79.72  0  0 B  0 B  -
2  localhost  192.168.0.119  9422  1.6.19  23246974  3.6 TiB  4.5 TiB  79.72  0  0 B  0 B  -
3  localhost  192.168.0.120  9422  1.6.19  23360333  3.6 TiB  4.5 TiB  79.72  0  0 B  0 B  -
4  localhost  192.168.0.121  9422  1.6.19  23192013  3.6 TiB  4.5 TiB  79.69  0  0 B  0 B  -
5  localhost  192.168.0.122  9422  1.6.19  23483418  3.6 TiB  4.5 TiB  79.70  0  0 B  0 B  -
6  localhost  192.168.0.123  9422  1.6.19  23308366  3.6 TiB  4.5 TiB  79.70  0  0 B  0 B  -
7  localhost  192.168.0.124  9422  1.6.19  23361992  3.6 TiB  4.5 TiB  79.69  0  0 B  0 B  -
8  localhost  192.168.0.125  9422  1.6.19  23300478  3.6 TiB  4.5 TiB  79.70  0  0 B  0 B  -
9  localhost  192.168.0.127  9422  1.6.19  23284897  3.5 TiB  4.5 TiB  78.72  0  0 B  0 B  -

[root@localhost mfs]# free -m
             total       used       free     shared    buffers     cached
Mem:         48295      46127       2168          0         38       8204
-/+ buffers/cache:      37884      10411
Swap:            0          0          0

The CPU usage is around 95%, peaking at 150%.

-----Original Message-----
From: Shen Guowen [mailto:sh...@ui...]
Sent: August 9, 2010 10:42
To: TianYuchuan (田玉川)
Cc: moo...@li...
Subject: Re: [Moosefs-users] mfs-master[10546]: CS(192.168.0.125) packet too long (115289537/50000000)

Don't worry! This happens because some of your chunk servers are currently unreachable. The master server notices this and modifies the metadata of the files on those chunk servers, setting "allvalidcopies" to 0 in "struct chunk". When the master rescans the files (fs_test_files() in filesystem.c) and finds that the number of valid copies is 0, it prints information to the syslog, as listed below. However, this printing is quite time-consuming, especially when the number of files is large. During this period the master ignores chunk server connections (it is inside a big loop testing files, and a single thread does this - maybe that is a pitfall). So even though your chunk servers are working correctly, it doesn't help yet (you can see the reconnection attempts in the chunk servers' syslog files).

You can let the master finish printing; it will then reconnect to the chunk servers, notice the files are there, set "allvalidcopies" back to the correct values, and work normally. Or you can recompile the program with lines 5512 and 5482 of filesystem.c (mfs-1.6.15) commented out. That suppresses the printed messages and, of course, reduces the fs test time.

Below is from Michal:

We give you here some quick patches you can apply to the master server to improve its performance for that amount of files. In matocsserv.c in mfsmaster you need to change this line:

    #define MaxPacketSize 50000000

into this:

    #define MaxPacketSize 500000000

We also suggest a change in filesystem.c in mfsmaster, in the "fs_test_files" function. Change this line:

    if ((uint32_t)(main_time())<=starttime+150) {

into:

    if ((uint32_t)(main_time())<=starttime+900) {

And also change this line:

    for (k=0 ; k<(NODEHASHSIZE/3600) && i<NODEHASHSIZE ; k++,i++) {

into this:

    for (k=0 ; k<(NODEHASHSIZE/14400) && i<NODEHASHSIZE ; k++,i++) {

You need to recompile the master server and start it again. The above changes should make the master server work more stably with a large number of files. Another suggestion would be to create two MooseFS instances (e.g. 2 x 200 million files). One master server could also be the metalogger for the other system, and vice versa.

Kind regards
Michał

--
Guowen Shen

On Sun, 2010-08-08 at 22:51 +0800, TianYuchuan (田玉川) wrote:
> hello, everyone!
>
> I have a big question - please help me, thank you very much. We intend to use MooseFS in our production environment as the storage for our online photo service. We'll store about 200 million photo files. I've built one master server (48G mem), one metalogger server, and eight chunk servers (8 x 1T SATA). While I was copying photo files into the MooseFS system, everything was good at the start. But after I had copied 57 million files, the master machine's CPU hit 100%.
>
> I stopped the master using "/usr/local/mfs/sbin/mfsmaster -s" and then started it again, but then there was a big problem: the master would not read my files. These files are important to me and I am very anxious - please help me recover them. Thanks.
>
> I got many errors in the syslog on the master server:
>
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 41991323: 2668/2526212449954462668/176s.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 00000000043CD358 (inode: 50379931 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 50379931: 2926/4294909215566102926/163b.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 00000000002966C3 (inode: 48284 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 48284: bookdata/178/8533354296639220178/180b.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000594726 (inode: 4242588 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 4242588: bookdata/6631/4300989258725036631/85s.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000993541 (inode: 8436892 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 8436892: bookdata/7534/3147352338521267534/122b.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000000D906E6 (inode: 12631196 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 12631196: bookdata/8691/11879047433161548691/164s.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 000000000118DC1E (inode: 16825500 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 16825500: bookdata/1232/17850056326363351232/166b.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001681BC7 (inode: 21019804 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 21019804: bookdata/26/12779298489336140026/246s.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001A804E1 (inode: 25214108 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 25214108: bookdata/3886/8729781571075193886/30s.jpg
> Aug 6 00:57:01 localhost mfsmaster[10546]: currently unavailable chunk 0000000001E7E826 (inode: 29408412 ; index: 0)
> Aug 6 00:57:01 localhost mfsmaster[10546]: * currently unavailable file 29408412: bookdata/4757/142868991575144757/316b.jpg
>
> Aug 7 23:56:36 localhost mfsmaster[10546]: CS(192.168.0.124) packet too long (115289537/50000000)
> Aug 7 23:56:36 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.124, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Aug 8 00:08:14 localhost mfsmaster[10546]: CS(192.168.0.127) packet too long (104113889/50000000)
> Aug 8 00:08:14 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.127, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
> Aug 8 00:21:03 localhost mfsmaster[10546]: CS(192.168.0.120) packet too long (117046565/50000000)
> Aug 8 00:21:03 localhost mfsmaster[10546]: chunkserver disconnected - ip: 192.168.0.120, port: 0, usedspace: 0 (0.00 GiB), totalspace: 0 (0.00 GiB)
>
> When I visit the mfscgi page, the error is "Can't connect to MFS master (IP:127.0.0.1 ; PORT:9421)".
>
> Thanks all!
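For anyone applying the constants above, the rebuild cycle is roughly the following; the install prefix is the one used elsewhere in this thread and may differ on your systems.

    # edit mfsmaster/matocsserv.c and mfsmaster/filesystem.c as described, then:
    ./configure && make
    /usr/local/mfs/sbin/mfsmaster -s     # stop the running master
    make install
    /usr/local/mfs/sbin/mfsmaster        # start it again on the new binary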
From: Pedro N. <pe...@st...> - 2011-03-16 21:45:36
Hi there,

I have a deployment of MooseFS where the metadata.mfs file was corrupted and became 0 bytes. I did not have any metaloggers installed, so there is now no way for me to get my data back using the mfsmetarestore tool. However, I can see that all the data is still there on the HDs... Is there any way to rescan the HDs and reconstruct the metadata.mfs file without losing all the data already stored there?

Sincerely,
Pedro Naranjo / STL Technologies / Solutions Architect / 888.556.0774
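For others reading along: the standard guard against exactly this situation is at least one mfsmetalogger on a separate host. A minimal sketch, with the config path and master hostname as assumptions:

    # on a second machine - /etc/mfsmetalogger.cfg (path may vary by install):
    #   MASTER_HOST = mfsmaster.example
    /usr/local/sbin/mfsmetalogger
    # the logger then keeps metadata_ml.mfs.back and changelog_ml.*.mfs copies
    # that mfsmetarestore can rebuild metadata.mfs from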
From: Robert D. <ro...@in...> - 2011-03-16 17:49:31
Thanks, I will update with my results from your attachment. I've had no success with fuse on Nexenta, but I can continue to research the OpenSolaris fuse builds and will update as I find anything worthwhile.

-Rob

_____
From: Michal Borychowski [mailto:mic...@ge...]
Sent: Wednesday, March 16, 2011 6:39 AM
To: 'Robert Dye'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Hi!

We checked this situation more carefully. Using the non-thread-safe "readdir" function (instead of "readdir_r"), everything went smoothly. The structure keeping directory entries looks like this on Nexenta:

    typedef struct dirent {
        ino_t           d_ino;      /* "inode number" of entry */
        off_t           d_off;      /* offset of disk directory entry */
        unsigned short  d_reclen;   /* length of this record */
        char            d_name[1];  /* name of file */
    } dirent_t;

We can see that just one byte is reserved for the file name, whereas on other operating systems it is typically 256 bytes. We made the appropriate changes to our code so that it is now compatible with Nexenta. Please find the modified file in the attachment.

(And we await tips on how to install fusefs on Nexenta, if you managed to install it.)

Kind regards
Michal

From: Robert Dye [mailto:ro...@in...]
Sent: Tuesday, March 15, 2011 12:31 AM
To: 'Michal Borychowski'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Thanks for the info; it looks like I had to submit a bug after all. Details:

PROBLEM: when the mfschunkserver has chunks and a restart occurs, mfschunkserver segfaults.

REPRODUCE:
OS: SunOS nexgfs 5.11 NexentaOS_134f i86pc i386 i86pc Solaris
COMPILE: add -D_POSIX_PTHREAD_SEMANTICS to the mfschunkserver/Makefile CFLAGS variable.
RUN: start the mfschunkserver and allow a few chunks to copy. Stop the process, and restart.

OUTPUT:
(gdb) run -d
Starting program: /usr/local/sbin/mfschunkserver -d
[New LWP 1]
[New LWP 2]
[LWP 2 exited]
[New LWP 2]
warning: Lowest section in /lib/libpthread.so.1 is .dynamic at 00000074
working directory: /usr/local/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /moose/ ...
hdd space manager: scanning... 99%

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 2]
0x30303030 in ?? ()

_____
From: Michal Borychowski [mailto:mic...@ge...]
Sent: Tuesday, March 08, 2011 2:56 AM
To: 'Robert Dye'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Hi Robert!

We'll give it a closer look. For the moment you can try adding the compiler option "-D_POSIX_PTHREAD_SEMANTICS" in the Makefile. It helps on Solaris 10, and should help on Nexenta too.

Kind regards
Michal Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00  Fax: +4822 874-41-01

From: Robert Dye [mailto:ro...@in...]
Sent: Friday, March 04, 2011 7:58 PM
To: moo...@li...
Subject: [Moosefs-users] Compile on Nexenta Fails

Hello,

When compiling MooseFS on NexentaStor 3.0.4 (Community), I use the following configure options:

./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount

After configuring, make returns the following output:

Making all in mfschunkserver
make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver'
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c
mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c
mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c
hddspacemgr.c: In function 'hdd_chunk_remove':
hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_chunk_get':
hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_check_folders':
hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_emptycrc':
hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'chunk_readcrc':
hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness
hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_freecrc':
hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_delayed_ops':
hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_io_begin':
hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'hdd_term':
hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c
mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c
mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c
mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c
../mfscommon/main.c: In function 'changeugid':
../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r'
../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r'
../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r'
make[2]: *** [mfschunkserver-main.o] Error 1
make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/mfs-1.6.20-2'
make: *** [all] Error 2

Ideas / suggestions / workarounds?
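Background on the error: on Solaris-derived systems like Nexenta, getpwnam_r / getpwuid_r / getgrnam_r default to the old 4-argument Solaris signatures, and -D_POSIX_PTHREAD_SEMANTICS selects the 5-argument POSIX versions that main.c expects; that is why the build stops with "too many arguments" without it. The same flag can also be passed at configure time instead of editing the Makefile by hand:

    CFLAGS="-g -O2 -D_POSIX_PTHREAD_SEMANTICS" \
        ./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount
    make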
From: Michal B. <mic...@ge...> - 2011-03-16 14:30:05
Hi!

We recently found a serious bug in MooseFS: when "mfsmakesnapshot" has been used, it turns out that "mfsmetarestore" fails to properly restore the metadata. For the moment we present an easy way to avoid this error. When restoring the metadata, in the first step you pass "metadata.mfs.back" to "mfsmetarestore" without the changelogs; in the second step you pass the newly created file to "mfsmetarestore" along with the changelogs.

So instead of:

    mfsmetarestore -a -d /usr/local/var/mfs

you need to do the following steps:

1. mfsmetarestore -m /usr/local/var/mfs/metadata.mfs.back -o /usr/local/var/mfs/metadata.mfs.backback
2. mv /usr/local/var/mfs/metadata.mfs.backback /usr/local/var/mfs/metadata.mfs.back
3. mfsmetarestore -a -d /usr/local/var/mfs

When restoring the metadata from a metalogger you need to use "metadata_ml.mfs.back" instead of "metadata.mfs.back".

This error is present in all versions of MooseFS (precisely, since the moment the SNAPSHOT operation was introduced). So if you have "SNAPSHOT" entries in your changelogs, you need to use the above steps when restoring the metadata. The bug will be fixed in the next release.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A., ul. Wołoska 7, 02-672 Warszawa, Budynek MARS, klatka D
Tel.: +4822 874-41-00  Fax: +4822 874-41-01
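The same three steps as a copy-paste sketch; the data directory is the one from the steps above (on a metalogger, substitute metadata_ml.mfs.back):

    #!/bin/sh
    # two-pass restore that works around the snapshot changelog bug
    D=/usr/local/var/mfs
    mfsmetarestore -m "$D/metadata.mfs.back" -o "$D/metadata.mfs.backback" || exit 1
    mv "$D/metadata.mfs.backback" "$D/metadata.mfs.back"
    mfsmetarestore -a -d "$D"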
From: Michal B. <mic...@ge...> - 2011-03-16 14:03:12
Hi!

We added a small change in 1.6.21: while unmounting, mfsmount will now send a "close session" command to the master. Such properly closed sessions will not be left "hanging" throughout the 7-day period.

Kind regards
Michal

-----Original Message-----
From: Flow Jiang [mailto:fl...@gm...]
Sent: Tuesday, March 01, 2011 5:06 PM
To: moo...@li...
Subject: Re: [Moosefs-users] Reserved files remain forever

[quoted thread trimmed; the full chain is quoted in Flow Jiang's message of 2011-03-17 14:56 above]
From: 姜智华 <fl...@gm...> - 2011-03-16 03:31:58
Hi,

I tried the patch but the core file still gets dumped (with mfs 1.6.20):

Core was generated by `mfsmount /home/fwjiang -o rw,mfscachefiles,mfsentrycacheto=30,mfsattrcacheto=30'.
Program terminated with signal 6, Aborted.
#0  0x00000031c16327f5 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install filesystem-2.4.30-2.fc12.x86_64 fuse-libs-2.8.5-2.fc12.x86_64 glibc-2.11.2-3.x86_64 libgcc-4.4.4-10.fc12.x86_64
(gdb) bt
#0  0x00000031c16327f5 in raise () from /lib64/libc.so.6
#1  0x00000031c1633fd5 in abort () from /lib64/libc.so.6
#2  0x00000031c166fa1b in __libc_message () from /lib64/libc.so.6
#3  0x00000031c1675336 in malloc_printerr () from /lib64/libc.so.6
#4  0x000000000040e4ad in read_data_term () at readdata.c:224
#5  0x00000000004131b5 in mainloop (args=0x7fffc23bf030, mp=0xacc290 "/home/fwjiang", mt=1, fg=0) at main.c:600
#6  0x00000000004134f8 in main (argc=<value optimized out>, argv=0x7fffc23bf158) at main.c:819

Any clues?

Thanks
Flow

On 3/15/11, Michal Borychowski <mic...@ge...> wrote:
> It'll be fixed in the next release. For the moment you may try this "patch":
>
> @@ -178,6 +178,7 @@ void read_data_end(void* rr) {
>      }
>      if (rrec->rbuff!=NULL) {
>          free(rrec->rbuff);
> +        rrec->rbuff=NULL;
>      }
>
>      pthread_mutex_lock(&glock);
>
> Kind regards
> Michal
>
> [remainder of the quoted thread trimmed; the full text is in Michal's message of 2011-03-15 13:05 below]
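To get file/line detail for the libc frames, gdb's own hint above can be followed first; on Fedora, debuginfo-install ships in yum-utils. The core file name below is hypothetical, use whatever your system produced:

    yum install yum-utils
    debuginfo-install fuse-libs glibc libgcc
    gdb /usr/local/bin/mfsmount core.12345   # hypothetical core name - adjust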
From: R.C. <mil...@gm...> - 2011-03-15 15:12:22
|
As stated here:
http://www.moosefs.org/news-reader/items/MFS_running_on_Mac_OS_X..html
the MooseFS native client can be mounted on OS X. Is there anyone with
experience in this area who can contribute some hints for the task?

Thank you

Bye
Raf
|
From: Stéphane B. <ste...@ga...> - 2011-03-15 14:24:31
|
Thanks for the answer.

But it happens really often, maybe every 2 or 3 loops. Could some setting
be wrong? We have 2 chunkservers ... that would mean both of them were
unavailable. I changed the timeout settings a little. Here are my settings.

Chunkservers:
MASTER_TIMEOUT = 2
HDD_TEST_FREQ = 10
These are the only 2 settings I changed.

Masters:
CHUNKS_LOOP_TIME = 60
CHUNKS_DEL_LIMIT = 5000
CHUNKS_WRITE_REP_LIMIT = 5
CHUNKS_READ_REP_LIMIT = 10
These are the only settings I changed; the rest should be at their defaults.

More explanation about the setup: we have 2 masters running Keepalived
(same idea as CARP), the chunkservers are on the same servers as the
masters, and the machines are dual quad-core Xeons with 8 GB of RAM and
15k rpm disks in RAID 5.

On 11-03-15 08:50 AM, Michal Borychowski wrote:
> Hi!
>
> These are logs from a test loop. It may happen that while the loop is
> running some chunkservers are unavailable. The next loop should show that
> everything is all right.
>
> Unless you have numbers in red in the first column in the CGI monitor,
> everything is fine.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
> -----Original Message-----
> From: Stéphane Boisvert [mailto:ste...@ga...]
> Sent: Thursday, March 10, 2011 8:40 PM
> To: moosefs-users
> Subject: [Moosefs-users] Unavailable Chunks
>
> Hi,
>       We have had moosefs running for 2 months now, and today we got a
> few errors on it. The master ran its structure check loop and it ended
> with 72 unavailable chunks and files. I don't know what this really
> means. The files are still accessible and the mfscheckfile output is
> good too. We have about 2 million files on the MooseFS currently and 2
> chunkservers in a redundant setup. This cluster runs version 1.6.17; we
> plan to upgrade it soon.
>
> Thanks,
>
> Stephane

--
*Stephane Boisvert*
Unix Administrator
Msn : ste...@ga...
E-mail : ste...@ga...
|
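For reference, these parameters are plain key = value entries in the daemons' configuration files, read at startup. The file paths below are assumptions (they depend on the configure prefix), as are the one-line glosses:

# mfschunkserver.cfg (commonly /etc/mfschunkserver.cfg; path is an assumption)
MASTER_TIMEOUT = 2      # seconds of master silence before the connection is dropped
HDD_TEST_FREQ = 10      # chunk test period, in seconds

# mfsmaster.cfg (commonly /etc/mfsmaster.cfg)
CHUNKS_LOOP_TIME = 60           # seconds between chunk maintenance loops
CHUNKS_DEL_LIMIT = 5000         # max chunk deletions per loop
CHUNKS_WRITE_REP_LIMIT = 5      # max replications to a chunkserver per loop
CHUNKS_READ_REP_LIMIT = 10      # max replications from a chunkserver per loop

One observation: MASTER_TIMEOUT = 2 is much lower than the usual shipped default of 60 seconds. If the master stalls even briefly (for instance while dumping metadata), such a short timeout may cause chunkservers to drop their connections and show up as unavailable in the next check loop, which would be consistent with the symptom described above.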
From: Michal B. <mic...@ge...> - 2011-03-15 13:05:36
|
It'll be fixed in the next release. For the moment you may try this "patch":

@@ -178,6 +178,7 @@ void read_data_end(void* rr) {
 	}
 	if (rrec->rbuff!=NULL) {
 		free(rrec->rbuff);
+		rrec->rbuff=NULL;
 	}

 	pthread_mutex_lock(&glock);

Kind regards
Michal

-----Original Message-----
From: Flow Jiang [mailto:fl...@gm...]
Sent: Friday, March 04, 2011 4:21 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Core Dumped from mfsmount with Autofs

I tried to re-compile mfsmount with the "free(freecblockshead)" line
commented out. Now our servers (which run 7x24) are happy: no more core
files. However, core files still get generated on our workstations when
they reboot. The core is generated from the "read_data_term" line right
after the "write_data_term" line mentioned previously.

Hopefully this will also get fixed in the next release, and it would be
even better if I could have a quick solution / patch for the issue.

Thanks
Flow

On 03/01/2011 11:37 PM, Flow Jiang wrote:
> Michal,
>
> Glad to know that this error can be solved simply by commenting out
> that line; I will try it tomorrow to see if it fixes the issue.
>
> It is annoying, since each core file takes about 170M, and I have tried
> to disable the core dump but failed. So hopefully we can have a better
> solution in the next release.
>
> Thanks
> Flow
>
> On 03/01/2011 09:00 PM, Michal Borychowski wrote:
>> Hi!
>>
>> This error is not a serious one. It may happen only upon exit. If these
>> errors are annoying, a quick solution is to comment out the
>> "free(freecblockshead)" line, recompile mfsmount and run it again. We'll
>> prepare a better solution in the next release.
>>
>> Kind regards
>> Michał
|
From: Michal B. <mic...@ge...> - 2011-03-15 12:54:19
|
Hi Robert!

These are kernel messages, and none of them (from [11018.006906] to
[11018.069913]) is connected with MooseFS. Maybe you have some hardware
problem?

Kind regards
Michal

From: Robert Dye [mailto:ro...@in...]
Sent: Thursday, March 10, 2011 3:20 AM
To: moo...@li...
Subject: [Moosefs-users] mooseFS locking the kernel up?

Hello,

I have a few servers which run mooseFS great; however, there is one that
continually has issues when running mfschunkserver. The process itself
will run for approximately three hours, then drop its connection to the
master. When it drops the connection to the master, I can stop the process
and start it again, but when I restart the mfschunkserver process the
system becomes unstable and hard-locks. The output I receive in messages
is the following (the last few lines are irrelevant; they just show that I
was able to plug a USB keyboard into the system, tried restarting
mfschunkserver, then had to reboot):

Mar 9 17:54:50 brickc kernel: [11017.971891] Pid: 1549, comm: mfschunkserver Tainted: G D 2.6.35-22-server #33-Ubuntu DH55HC/
Mar 9 17:54:50 brickc kernel: [11017.973678] RIP: 0010:[<ffffffff8110d241>] [<ffffffff8110d241>] page_evictable+0x21/0x80
Mar 9 17:54:50 brickc kernel: [11017.975470] RSP: 0018:ffff880378931438 EFLAGS: 00010286
Mar 9 17:54:50 brickc kernel: [11017.977256] RAX: fff688000668a848 RBX: ffffea0009ea1b88 RCX: ffffffffffffffd0
Mar 9 17:54:50 brickc kernel: [11017.979067] RDX: 020000000000080d RSI: 0000000000000000 RDI: ffffea0009ea1b88
Mar 9 17:54:50 brickc kernel: [11017.980849] RBP: ffff880378931438 R08: dead000000200200 R09: dead000000100100
Mar 9 17:54:50 brickc kernel: [11017.982637] R10: ffff8801000013d0 R11: 0000000000000000 R12: ffffea0009ea1bb0
Mar 9 17:54:50 brickc kernel: [11017.984417] R13: ffff880378931878 R14: ffff8803789316a8 R15: ffff8801000012d8
Mar 9 17:54:50 brickc kernel: [11017.986230] FS: 00007f7178d99700(0000) GS:ffff880001e20000(0000) knlGS:0000000000000000
Mar 9 17:54:50 brickc kernel: [11017.988035] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Mar 9 17:54:50 brickc kernel: [11017.989887] CR2: 00007f7178123000 CR3: 0000000419f6d000 CR4: 00000000000006e0
Mar 9 17:54:50 brickc kernel: [11017.991715] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 9 17:54:50 brickc kernel: [11017.993577] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Mar 9 17:54:50 brickc kernel: [11017.995448] Process mfschunkserver (pid: 1549, threadinfo ffff880378930000, task ffff8804169f16e0)
Mar 9 17:54:50 brickc kernel: [11017.999197] ffff880378931548 ffffffff8110ed70 0000000000000000 ffff8803789314f8
Mar 9 17:54:50 brickc kernel: [11017.999229] <0> 0000000000000000 ffff88000668a848 0000000000000000 0000000000000000
Mar 9 17:54:50 brickc kernel: [11018.001143] <0> 0000000000000000 0000000000000001 ffff8803789314a8 ffffffff81149b01
Mar 9 17:54:50 brickc kernel: [11018.006906] [<ffffffff8110ed70>] shrink_page_list+0x100/0x580
Mar 9 17:54:50 brickc kernel: [11018.008839] [<ffffffff81149b01>] ? mem_cgroup_del_lru_list+0x21/0xa0
Mar 9 17:54:50 brickc kernel: [11018.010790] [<ffffffff81149c09>] ? mem_cgroup_del_lru+0x39/0x40
Mar 9 17:54:50 brickc kernel: [11018.012724] [<ffffffff8110d98b>] ? isolate_lru_pages+0xdb/0x260
Mar 9 17:54:50 brickc kernel: [11018.014637] [<ffffffff8110f4bd>] shrink_inactive_list+0x2cd/0x7f0
Mar 9 17:54:50 brickc kernel: [11018.016550] [<ffffffff81107337>] ? __alloc_pages_slowpath+0x1a7/0x590
Mar 9 17:54:50 brickc kernel: [11018.018462] [<ffffffff8110d772>] ? get_scan_count+0x172/0x2b0
Mar 9 17:54:50 brickc kernel: [11018.020328] [<ffffffff8110fb8b>] shrink_zone+0x1ab/0x230
Mar 9 17:54:50 brickc kernel: [11018.022159] [<ffffffff8110fc93>] shrink_zones+0x83/0x130
Mar 9 17:54:50 brickc kernel: [11018.023986] [<ffffffff8110fdde>] do_try_to_free_pages+0x9e/0x360
Mar 9 17:54:50 brickc kernel: [11018.025811] [<ffffffff8111024b>] try_to_free_pages+0x6b/0x70
Mar 9 17:54:50 brickc kernel: [11018.027612] [<ffffffff8110740a>] __alloc_pages_slowpath+0x27a/0x590
Mar 9 17:54:50 brickc kernel: [11018.029414] [<ffffffff8122b77a>] ? __jbd2_log_space_left+0x1a/0x40
Mar 9 17:54:50 brickc kernel: [11018.031220] [<ffffffff81107884>] __alloc_pages_nodemask+0x164/0x1d0
Mar 9 17:54:50 brickc kernel: [11018.033020] [<ffffffff811397ba>] alloc_pages_current+0x9a/0x100
Mar 9 17:54:50 brickc kernel: [11018.034801] [<ffffffff81100da7>] __page_cache_alloc+0x87/0x90
Mar 9 17:54:50 brickc kernel: [11018.036600] [<ffffffff8110215c>] grab_cache_page_write_begin+0x7c/0xc0
Mar 9 17:54:50 brickc kernel: [11018.038474] [<ffffffff811f1964>] ext4_da_write_begin+0x144/0x290
Mar 9 17:54:50 brickc kernel: [11018.040825] [<ffffffff811f21ed>] ? ext4_da_write_end+0xfd/0x2e0
Mar 9 17:54:50 brickc kernel: [11018.043165] [<ffffffff8104ee12>] ? enqueue_entity+0x132/0x1b0
Mar 9 17:54:50 brickc kernel: [11018.045448] [<ffffffff810ffb36>] ? iov_iter_copy_from_user_atomic+0x96/0x170
Mar 9 17:54:50 brickc kernel: [11018.047769] [<ffffffff810ffe62>] generic_perform_write+0xc2/0x1d0
Mar 9 17:54:50 brickc kernel: [11018.049582] [<ffffffff810fffd4>] generic_file_buffered_write+0x64/0xa0
Mar 9 17:54:50 brickc kernel: [11018.051321] [<ffffffff811028e0>] __generic_file_aio_write+0x240/0x470
Mar 9 17:54:50 brickc kernel: [11018.053052] [<ffffffff810901ed>] ? futex_wait_queue_me+0xcd/0x110
Mar 9 17:54:50 brickc kernel: [11018.054751] [<ffffffff81102b75>] generic_file_aio_write+0x65/0xd0
Mar 9 17:54:50 brickc kernel: [11018.056442] [<ffffffff811e77a9>] ext4_file_write+0x39/0xb0
Mar 9 17:54:50 brickc kernel: [11018.058204] [<ffffffff81152bea>] do_sync_write+0xda/0x120
Mar 9 17:54:50 brickc kernel: [11018.059901] [<ffffffff8159e76e>] ? _raw_spin_lock+0xe/0x20
Mar 9 17:54:50 brickc kernel: [11018.061568] [<ffffffff81090a62>] ? futex_wake+0x112/0x130
Mar 9 17:54:50 brickc kernel: [11018.063232] [<ffffffff8128f208>] ? apparmor_file_permission+0x18/0x20
Mar 9 17:54:50 brickc kernel: [11018.064900] [<ffffffff8125e7a6>] ? security_file_permission+0x16/0x20
Mar 9 17:54:50 brickc kernel: [11018.066565] [<ffffffff81152ec8>] vfs_write+0xb8/0x1a0
Mar 9 17:54:50 brickc kernel: [11018.068234] [<ffffffff81153862>] sys_pwrite64+0x82/0xa0
Mar 9 17:54:50 brickc kernel: [11018.069913] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
Mar 9 17:54:50 brickc kernel: [11018.077405] RSP <ffff880378931438>
Mar 9 17:54:50 brickc kernel: [11018.079482] ---[ end trace 5c000d67753ebd63 ]---
Mar 9 17:59:04 brickc kernel: [11271.114370] usb 2-1.3: new low speed USB device using ehci_hcd and address 4
Mar 9 17:59:04 brickc kernel: [11271.253770] input: USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input5
Mar 9 17:59:04 brickc kernel: [11271.259128] generic-usb 0003:04D9:1603.0003: input,hidraw0: USB HID v1.10 Keyboard [ USB Keyboard] on usb-0000:00:1d.0-1.3/input0
Mar 9 17:59:04 brickc kernel: [11271.279819] input: USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/input/input6
Mar 9 17:59:04 brickc kernel: [11271.284307] generic-usb 0003:04D9:1603.0004: input,hidraw1: USB HID v1.10 Device [ USB Keyboard] on usb-0000:00:1d.0-1.3/input1
Mar 9 18:06:20 brickc kernel: imklog 4.2.0, log source = /proc/kmsg started.

Ideas for debugging this issue?
|
From: Michal B. <mic...@ge...> - 2011-03-15 12:51:09
|
Hi!

These are logs from a test loop. It may happen that while the loop is
running some chunkservers are unavailable. The next loop should show that
everything is all right.

Unless you have numbers in red in the first column in the CGI monitor,
everything is fine.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Stéphane Boisvert [mailto:ste...@ga...]
Sent: Thursday, March 10, 2011 8:40 PM
To: moosefs-users
Subject: [Moosefs-users] Unavailable Chunks

Hi,
      We have had moosefs running for 2 months now, and today we got a few
errors on it. The master ran its structure check loop and it ended with 72
unavailable chunks and files. I don't know what this really means. The
files are still accessible and the mfscheckfile output is good too. We
have about 2 million files on the MooseFS currently and 2 chunkservers in
a redundant setup. This cluster runs version 1.6.17; we plan to upgrade it
soon.

Thanks,

Stephane
|
From: Michal B. <mic...@ge...> - 2011-03-15 12:43:49
|
Hi!

That's strange - we installed MooseFS successfully on Nexenta with this
flag. We'll look at it again. On the other hand, we failed to install
fusefs - have you managed to do it? If yes, please share exact
instructions on how to install it.

Thank you
Michał Borychowski

From: Robert Dye [mailto:ro...@in...]
Sent: Tuesday, March 15, 2011 12:31 AM
To: 'Michal Borychowski'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Thanks for the info, looks like I had to submit a bug after all:

Details:

PROBLEM: When the mfschunkserver has chunks and a restart occurs,
mfschunkserver will segfault.

REPRODUCE:
OS: SunOS nexgfs 5.11 NexentaOS_134f i86pc i386 i86pc Solaris
COMPILE: Add -D_POSIX_PTHREAD_SEMANTICS to the mfschunkserver/Makefile
CFLAGS variable.
RUN: Start the mfschunkserver and allow a few chunks to copy. Stop the
process, and restart.

OUTPUT:
(gdb) run -d
Starting program: /usr/local/sbin/mfschunkserver -d
[New LWP 1]
[New LWP 2]
[LWP 2 exited]
[New LWP 2]
warning: Lowest section in /lib/libpthread.so.1 is .dynamic at 00000074
working directory: /usr/local/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /moose/ ...
hdd space manager: scanning... 99%

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 2]
0x30303030 in ?? ()

_____

From: Michal Borychowski [mailto:mic...@ge...]
Sent: Tuesday, March 08, 2011 2:56 AM
To: 'Robert Dye'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Hi Robert!

We'll give it a closer look; for the moment you can try adding the
compiler option "-D_POSIX_PTHREAD_SEMANTICS" in the Makefile. It helps on
Solaris 10, and should help on Nexenta too.

Kind regards
Michal Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Robert Dye [mailto:ro...@in...]
Sent: Friday, March 04, 2011 7:58 PM
To: moo...@li...
Subject: [Moosefs-users] Compile on Nexenta Fails

Hello,

When compiling moosefs on NexentaStor 3.0.4 (Community), I use the
following configure options:

./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount

After configuring, make returns the following output:

Making all in mfschunkserver
make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver'
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c
mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c
mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c
hddspacemgr.c: In function 'hdd_chunk_remove':
hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_chunk_get':
hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_check_folders':
hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_emptycrc':
hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'chunk_readcrc':
hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness
hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_freecrc':
hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_delayed_ops':
hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_io_begin':
hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'hdd_term':
hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c
mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c
mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c
mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c
../mfscommon/main.c: In function 'changeugid':
../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r'
../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r'
../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r'
make[2]: *** [mfschunkserver-main.o] Error 1
make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/mfs-1.6.20-2'
make: *** [all] Error 2

Ideas/Suggestions/Work-Around?
|
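A note on those three "too many arguments" errors, for anyone else building on Nexenta or Solaris: Solaris libc historically exposes two variants of getpwnam_r(), getpwuid_r() and getgrnam_r(). Without a feature-test macro the headers declare the legacy four-argument Solaris forms, while mfscommon/main.c calls the POSIX five-argument forms; defining _POSIX_PTHREAD_SEMANTICS switches the headers to the POSIX declarations, which is why the flag fixes the build. A small self-contained sketch of the difference (illustrative code, not MooseFS source):

/* On Solaris/Nexenta without _POSIX_PTHREAD_SEMANTICS, <pwd.h> declares the
 * legacy form, which returns the struct pointer directly:
 *
 *   struct passwd *getpwnam_r(const char *name, struct passwd *pwd,
 *                             char *buffer, int buflen);
 *
 * With the macro defined (or -D_POSIX_PTHREAD_SEMANTICS on the compile
 * line), the POSIX form is declared instead, which is what main.c expects:
 *
 *   int getpwnam_r(const char *name, struct passwd *pwd, char *buffer,
 *                  size_t buflen, struct passwd **result);
 */
#define _POSIX_PTHREAD_SEMANTICS   /* must precede the system headers */
#include <pwd.h>
#include <stdio.h>

int main(void) {
    struct passwd pw, *result = NULL;
    char buf[4096];

    /* POSIX variant: returns 0 on success and reports the entry via *result */
    if (getpwnam_r("root", &pw, buf, sizeof(buf), &result) == 0 && result != NULL) {
        printf("uid of root: %ld\n", (long)result->pw_uid);
    }
    return 0;
}

On Linux the macro is simply ignored and the POSIX forms are the default, which is why the same source builds there without the flag.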
From: Robert D. <ro...@in...> - 2011-03-14 23:30:46
|
Thanks for the info, looks like I had to submit a bug after all:

Details:

PROBLEM: When the mfschunkserver has chunks and a restart occurs,
mfschunkserver will segfault.

REPRODUCE:
OS: SunOS nexgfs 5.11 NexentaOS_134f i86pc i386 i86pc Solaris
COMPILE: Add -D_POSIX_PTHREAD_SEMANTICS to the mfschunkserver/Makefile
CFLAGS variable.
RUN: Start the mfschunkserver and allow a few chunks to copy. Stop the
process, and restart.

OUTPUT:
(gdb) run -d
Starting program: /usr/local/sbin/mfschunkserver -d
[New LWP 1]
[New LWP 2]
[LWP 2 exited]
[New LWP 2]
warning: Lowest section in /lib/libpthread.so.1 is .dynamic at 00000074
working directory: /usr/local/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /moose/ ...
hdd space manager: scanning... 99%

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 2]
0x30303030 in ?? ()

_____

From: Michal Borychowski [mailto:mic...@ge...]
Sent: Tuesday, March 08, 2011 2:56 AM
To: 'Robert Dye'
Cc: moo...@li...
Subject: Re: [Moosefs-users] Compile on Nexenta Fails

Hi Robert!

We'll give it a closer look; for the moment you can try adding the
compiler option "-D_POSIX_PTHREAD_SEMANTICS" in the Makefile. It helps on
Solaris 10, and should help on Nexenta too.

Kind regards
Michal Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Robert Dye [mailto:ro...@in...]
Sent: Friday, March 04, 2011 7:58 PM
To: moo...@li...
Subject: [Moosefs-users] Compile on Nexenta Fails

Hello,

When compiling moosefs on NexentaStor 3.0.4 (Community), I use the
following configure options:

./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount

After configuring, make returns the following output:

Making all in mfschunkserver
make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver'
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c
mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c
mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c
hddspacemgr.c: In function 'hdd_chunk_remove':
hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_chunk_get':
hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_check_folders':
hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_emptycrc':
hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'chunk_readcrc':
hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness
hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'chunk_freecrc':
hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_delayed_ops':
hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c: In function 'hdd_io_begin':
hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness
hddspacemgr.c: In function 'hdd_term':
hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness
mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c
mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c
mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c
mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c
../mfscommon/main.c: In function 'changeugid':
../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r'
../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r'
../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r'
make[2]: *** [mfschunkserver-main.o] Error 1
make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/mfs-1.6.20-2'
make: *** [all] Error 2

Ideas/Suggestions/Work-Around?
|