From: MarlboroMoo <mar...@gm...> - 2011-05-16 06:23:17
never mind, I used "mfsfilerepair" to fix it, thanks :D

2011/5/11 MarlboroMoo <mar...@gm...>

> Hi there,
>
> we use MFS 1.6.17 and got an error because of a network problem;
> the message looks like this:
>
> currently unavailable chunk 00000000049E301B (inode: 10022376 ; index: 0)
>> + currently unavailable reserved file 10022376: search/property/some.file.properties
>> unavailable chunks: 1
>> unavailable reserved files: 1
>
> How can I solve this problem? Should I remove this file from the metadata?
> Thanks in advance!
> --
> Marlboromoo

--
Marlboromoo
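For readers hitting the same "currently unavailable chunk" state, a rough sketch of the check-and-repair sequence follows; the mount point is a placeholder, not taken from the thread, and mfsfilerepair discards data held in the unavailable chunks, so it is only appropriate once those chunks really cannot be recovered.

    # Hypothetical repair sequence on a mounted MooseFS volume.
    # mfsfileinfo / mfscheckfile show which chunks and copies back the file;
    # mfsfilerepair clears the references to the missing chunk versions.
    mfsfileinfo /mnt/mfs/search/property/some.file.properties
    mfscheckfile /mnt/mfs/search/property/some.file.properties
    mfsfilerepair /mnt/mfs/search/property/some.file.properties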
From: Robin W. <li...@wa...> - 2011-05-15 13:41:14
Hi,

I've had the same problem; mine were only updated if the file size changed, so maybe it is the same. My problem was that I used "mfscachemode=YES" when mounting; changing this to "mfscachemode=AUTO" solved it for me.

Best regards,
Robin

On 14-5-2011 4:40, Anh K. Huynh wrote:
> Hello,
>
> I've just encountered a problem with MooseFS. Two of my servers share the same directory /foo/bar/ via an MFS master. The contents of the directory are often updated. The problem is that:
>
> * when a file in the directory is updated on the first server,
> * the client on the second server still sees the old version of that file.
>
> If the client on the second server exits (via the command 'exit' on an SSH terminal) and logs in to the server again, it sees the latest version of the file.
>
> So my question is: how can I force all clients to see the same version (the latest one) of a file when that file is updated on any MFS client?
>
> My MFS setup: one master, three chunk servers, all files have goal 2, trash files have goal 1. These servers are located on a 10 MB/s network.
>
> Thank you for your help,
>
> Regards,
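For reference, a minimal remount sketch based on Robin's fix; the master host name and mount point are placeholders, and it assumes the option is passed to mfsmount via -o as in other 1.6.x setups.

    # Hypothetical remount with caching decided by mfsmount itself ("AUTO")
    # instead of being forced on ("YES"), so other clients see updates.
    umount /mnt/mfs
    mfsmount -H mfsmaster.example.net -o mfscachemode=AUTO /mnt/mfs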
From: Richard C. <ric...@ai...> - 2011-05-14 19:35:03
> Which command do you use to mount on /mfs?

mfsmount -H 10.17.1.220 -S proxmox /mfs/pve

Are there options I should enable for mfsmount that would help my use case? Are there options for FUSE which might be a factor as well?

Thanks,
Richard.
From: Giovanni T. <gt...@li...> - 2011-05-14 10:22:59
On Fri, May 13, 2011 at 7:26 PM, Richard Chute <ric...@ai...> wrote:
> 5385 open("/mfs/pve/images/101/vm-101-disk-1.raw", O_RDWR|O_DIRECT|O_CLOEXEC) = -1 EINVAL (Invalid argument)

Which command do you use to mount on /mfs?

> What version of proxmox are you running?

Proxmox VE 1.8 with 2.6.32 kernel
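The strace line quoted above shows the image being opened with O_DIRECT and failing with EINVAL, which is typical of a FUSE mount that does not support direct I/O. A quick way to confirm this without involving KVM is a direct-I/O write on the mount; the test file path is a placeholder.

    # Hypothetical check: if the mount rejects O_DIRECT, dd fails with
    # "Invalid argument", matching the EINVAL in the strace output above.
    dd if=/dev/zero of=/mfs/pve/odirect-test bs=4k count=1 oflag=direct
    rm -f /mfs/pve/odirect-test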
From: Anh K. H. <ky...@vi...> - 2011-05-14 03:11:26
Hello,

I've just encountered a problem with MooseFS. Two of my servers share the same directory /foo/bar/ via an MFS master. The contents of the directory are often updated. The problem is that:

* when a file in the directory is updated on the first server,
* the client on the second server still sees the old version of that file.

If the client on the second server exits (via the command 'exit' on an SSH terminal) and logs in to the server again, it sees the latest version of the file.

So my question is: how can I force all clients to see the same version (the latest one) of a file when that file is updated on any MFS client?

My MFS setup: one master, three chunk servers, all files have goal 2, trash files have goal 1. These servers are located on a 10 MB/s network.

Thank you for your help,

Regards,

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
From: Robert D. <ro...@in...> - 2011-05-13 17:38:43
I was trying to get more debug information and removed the optimization flags (-O2) from the compile. The process has been running so far with no problems. Is this a gcc issue on Nexenta or potentially still a code bug?

-Rob

_____

From: Robert Dye [mailto:ro...@in...]
Sent: Tuesday, May 03, 2011 2:19 PM
To: 'Michal Borychowski'
Cc: moo...@li...
Subject: [Moosefs-users] FW: SPAM?: RE: More nexenta bugs

OK. I ran mfschunkserver through gdb and listed below is the output (Nexenta):

-------------
mfschunkserver daemon initialized properly

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 18]
0xfee4472b in memset () from /lib/libc.so.1
(gdb) bt
#0  0xfee4472b in memset () from /lib/libc.so.1
#1  0x00000001 in ?? ()
#2  0x0805aa4e in hdd_chunkop (chunkid=1874218, version=0, newversion=0, copychunkid=0, copyversion=0, length=1) at hddspacemgr.c:2238
#3  0x0805ed07 in replicate (chunkid=1874218, version=1, srccnt=1 '\001', srcs=0xd6dcda8 "") at replicator.c:452
#4  0x0804b41f in job_worker (th_arg=0xd0af990) at bgjobs.c:204
#5  0xfeecfe33 in _thrp_setup () from /lib/libc.so.1
#6  0xfeed00c0 in ?? () from /lib/libc.so.1
#7  0x00000000 in ?? ()
(gdb) info threads
  27 LWP 25  0xfee44613 in memcpy () from /lib/libc.so.1
  26 LWP 24  0xfee44613 in memcpy () from /lib/libc.so.1
  25 LWP 23  0xfee44613 in memcpy () from /lib/libc.so.1
  24 LWP 22  0xfee44613 in memcpy () from /lib/libc.so.1
  23 LWP 21  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  22 LWP 20  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  21 LWP 19  0xfee44613 in memcpy () from /lib/libc.so.1
  20 LWP 18  0xfee4472b in memset () from /lib/libc.so.1
  19 LWP 17  0xfee44613 in memcpy () from /lib/libc.so.1
  18 LWP 16  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  17 LWP 15  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  16 LWP 14  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  15 LWP 13  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  14 LWP 12  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  13 LWP 11  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  12 LWP 10  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  11 LWP 9   0xfeed0119 in __lwp_park () from /lib/libc.so.1
  10 LWP 8   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   9 LWP 7   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   8 LWP 6   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   7 LWP 5   0xfeed4cc5 in munmap () from /lib/libc.so.1
   6 LWP 4   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   5 LWP 3   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   1 LWP 1   0xfeed4de5 in __pollsys () from /lib/libc.so.1
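A rough sketch of the debug rebuild described here, assuming the standard autotools source tree shipped with MooseFS 1.6.x; dropping -O2 keeps gdb line information accurate and rules out optimizer-related differences.

    # Hypothetical rebuild with debug symbols and no optimization.
    ./configure CFLAGS="-g -O0"
    make
    make install

    # Then run the chunkserver under gdb, as suggested earlier in the thread:
    #   gdb mfschunkserver
    #   (gdb) run -with-your-options
    #   (gdb) bt            # after the crash
    #   (gdb) info threads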
From: Kristofer P. <kri...@cy...> - 2011-05-13 17:31:20
Just curious: how has your performance been with that? Have you run any type of performance benchmark from within any of the domUs?

----- Original Message -----
From: "Ólafur Ósvaldsson" <osv...@ne...>
To: "Richard Chute" <ric...@ai...>
Cc: moo...@li...
Sent: Friday, May 13, 2011 10:05:54 AM
Subject: Re: [Moosefs-users] KVM on MFS

Hi,

Not sure if this is the same problem as yours, but we are running a decent Xen setup with MFS as the storage for the VMs. It works very well, and the only problems we have had were when trying to use directio; if that is disabled it works fine.

/Oli

On 13.5.2011, at 14:24, Richard Chute wrote:
> Hello MFS Devs,
>     I am evaluating MooseFS for use in various aspects of our business, and one of them is its use as a storage mechanism for virtual machines. Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files.
>     I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.
>
> Thanks,
> Richard.

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400
From: Richard C. <ric...@ai...> - 2011-05-13 17:26:39
Hi Giovanni,

Thank you for the encouragement... at least I know someone out there has done this successfully. We're running Proxmox VE 1.8, and this is the exact error message we're getting:

5385 open("/mfs/pve/images/101/vm-101-disk-1.raw", O_RDWR|O_DIRECT|O_CLOEXEC) = -1 EINVAL (Invalid argument)

What version of proxmox are you running?

Thanks,
Richard.

On 11-05-13 12:45 PM, Giovanni Toraldo wrote:
> Hi Richard,
>
> On 13/05/2011 16:24, Richard Chute wrote:
>> Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files. I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.
>
> I am managing a Proxmox cluster of 2 nodes with raw disk images on an MFS volume, and I haven't had any problems with it.
>
> Actually I am mounting the mfs volume via an /etc/network/if-up.d/moosefs-mount.sh, which mounts the volume under /srv/mfs, and on Proxmox I configured a new storage volume for virtual images; that's all.
>
> Bye.
From: Giovanni T. <gt...@li...> - 2011-05-13 15:46:05
Hi Richard,

On 13/05/2011 16:24, Richard Chute wrote:
> Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files. I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.

I am managing a Proxmox cluster of 2 nodes with raw disk images on an MFS volume, and I haven't had any problems with it.

Actually I am mounting the mfs volume via an /etc/network/if-up.d/moosefs-mount.sh, which mounts the volume under /srv/mfs, and on Proxmox I configured a new storage volume for virtual images; that's all.

Bye.

--
Giovanni Toraldo
http://www.libersoft.it/
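Giovanni names the if-up.d hook but not its contents. A minimal sketch of such a script might look like the following; the master host name is an assumption, only the script path and /srv/mfs mount point come from his message.

    #!/bin/sh
    # Hypothetical /etc/network/if-up.d/moosefs-mount.sh:
    # mount the MooseFS volume once a network interface comes up,
    # skipping the loopback interface and repeated runs.
    [ "$IFACE" = "lo" ] && exit 0
    mountpoint -q /srv/mfs && exit 0
    mkdir -p /srv/mfs
    mfsmount -H mfsmaster.example.net /srv/mfs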
From: Ólafur Ó. <osv...@ne...> - 2011-05-13 15:24:33
Hi,

Not sure if this is the same problem as yours, but we are running a decent Xen setup with MFS as the storage for the VMs. It works very well, and the only problems we have had were when trying to use directio; if that is disabled it works fine.

/Oli

On 13.5.2011, at 14:24, Richard Chute wrote:
> Hello MFS Devs,
>     I am evaluating MooseFS for use in various aspects of our business, and one of them is its use as a storage mechanism for virtual machines. Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files.
>     I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.
>
> Thanks,
> Richard.

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400
From: Thomas S H. <tha...@gm...> - 2011-05-13 14:50:19
I am not familiar with using proxmox for managing KVM; I have always used libvirt. When running KVM virtual machines with the storage on MooseFS I have had no problems; I have used QEMU 0.12 through 0.14.

Can you manually start the KVM virtual machines via the qemu-kvm command? How does proxmox manage/run the virtual machines?

At the end of the day KVM should have no problems running virtual machines on MooseFS; I have been doing it for quite some time. What is the underlying OS for your hypervisors and MooseFS chunkservers? What version of MooseFS are you using?

-Thomas S Hatch

On Fri, May 13, 2011 at 8:24 AM, Richard Chute <ric...@ai...> wrote:

> Hello MFS Devs,
>     I am evaluating MooseFS for use in various aspects of our business, and one of them is its use as a storage mechanism for virtual machines. Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files.
>     I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.
>
> Thanks,
> Richard.
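A manual start along the lines Thomas suggests might look like the sketch below; the image path is taken from earlier in the thread, while the memory size, VNC display and cache mode are assumptions. Note that cache=none makes QEMU open the image with O_DIRECT, which is exactly what fails on a FUSE mount without direct I/O support, so a writethrough/writeback cache mode is the safer choice here.

    # Hypothetical manual start of the guest outside proxmox, to isolate the problem.
    # cache=writethrough avoids O_DIRECT; cache=none would reproduce the EINVAL
    # seen in the strace output earlier in the thread.
    qemu-kvm -m 1024 \
      -drive file=/mfs/pve/images/101/vm-101-disk-1.raw,if=virtio,cache=writethrough \
      -vnc :1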
From: Richard C. <ric...@ai...> - 2011-05-13 14:39:49
Hello MFS Devs,

I am evaluating MooseFS for use in various aspects of our business, and one of them is its use as a storage mechanism for virtual machines. Specifically, we're using proxmox for virtualization in our environment, and we seem to be hitting a snag when using KVM containers -- the virtual machines won't start. We currently believe that this may be because MFS (or FUSE, possibly) does not allow mmap'ing of files.

I am wondering if anyone can shed some light on this type of situation (using MFS for storage of KVM virtual machines) and/or if anyone has any known or possibly unknown issues with mmap'ing files on MFS.

Thanks,
Richard.
From: Papp T. <to...@ma...> - 2011-05-12 11:38:57
On 05/11/2011 11:19 AM, Giovanni Toraldo wrote:
> Are you using 10.04 packages on 11.04? This isn't such a good thing. You
> need to recompile the package on natty.

It's still working with no problems; I don't want to touch it as long as nothing changes.

> ps: I am the maintainer of that PPA, I haven't had any problems so far
> on lucid nodes.

Nice, thank you :)

tamas
From: Papp T. <to...@ma...> - 2011-05-12 11:36:58
On 05/11/2011 01:27 AM, Robert Dye wrote:
> Try running it in GDB to get more useful info for the developers:
>
> gdb mfsmetalogger
> (gdb) run -with-your-options
>
> When it segfaults you should be able to type
>
> bt
> info threads
>
> And resend for someone to help you quicker.

Thank you for your help. I still could not reproduce it.

tamas
From: Giovanni T. <gt...@li...> - 2011-05-11 09:19:24
Hi,

On 11/05/2011 01:07, Papp Tamas wrote:
> ii  mfs-metalogger  1.6.20-2ubuntu1  MooseFS metalogger server
>
> from:
> deb http://ppa.launchpad.net/bncf-mnt/musa/ubuntu lucid main
>
> Ubuntu 10.04 amd64
>
> The master server is 11.04 with the same packages as above.

Are you using 10.04 packages on 11.04? This isn't such a good thing. You need to recompile the package on natty.

ps: I am the maintainer of that PPA, I haven't had any problems so far on lucid nodes.

Bye.

--
Giovanni Toraldo
http://www.libersoft.it/
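A rough sketch of the rebuild Giovanni suggests, assuming the PPA also publishes the source package and a matching deb-src line is present on the natty host; the package name comes from the thread, the exact steps do not.

    # Hypothetical rebuild of the metalogger package on natty (Ubuntu 11.04).
    apt-get build-dep mfs-metalogger
    apt-get source mfs-metalogger
    cd mfs-*/
    dpkg-buildpackage -us -uc -b
    dpkg -i ../mfs-metalogger_*.deb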
From: <wk...@bn...> - 2011-05-11 05:53:44
Greetings:

We are testing MFS and have been very impressed so far.

One discussion that we are having internally is the desirability of using RAID (either RAID0, RAID1 or RAID10) on chunkservers. Obviously, with a goal of 2 or more it's not a protection issue, which is the classic RAID scenario.

As we see it, RAID on chunkservers has the following to recommend it:

+ stability: Assuming RAID1/10, if a drive dies the chunkserver doesn't immediately fall out of the cluster and force a rebalance. We can fail it deliberately at a time of our choosing to replace the drive.
+ fewer disks: with RAID1/10 we might feel better using a goal of 2 instead of 3.
+ speed: RAID0 would be faster outright. RAID1/10 may provide faster reads and slightly slower writes (RAID10 much better).

The reasons against would be:

- stability: with RAID0, the loss of a single drive kills the entire chunkserver.
- stability: another layer to fail (the MD layer).
- cost: increased power consumption and of course 2x or more the number of drives. We would be using Linux software RAID (the MD driver) rather than hardware RAID, so card cost is not an issue.
- speed: slightly slower writes on RAID1/10.

So what is the consensus of the more experienced users? Are you using RAID (0, 1, 10 or others) on your chunkservers? Are we missing something in the above analysis?

-bill
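For concreteness, a chunkserver disk laid out the way bill describes might be set up roughly as follows; device names, filesystem and mount point are placeholders, and /etc/mfs/mfshdd.cfg is the usual chunkserver disk list in 1.6.x.

    # Hypothetical RAID10 layout for a chunkserver using Linux md.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    mkfs.ext3 /dev/md0
    mkdir -p /srv/mfschunks1
    mount /dev/md0 /srv/mfschunks1

    # Point the chunkserver at the array and reload (or restart) it.
    echo "/srv/mfschunks1" >> /etc/mfs/mfshdd.cfg
    mfschunkserver reload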
From: MarlboroMoo <mar...@gm...> - 2011-05-11 01:18:49
Hi there,

We use MFS 1.6.17 and got an error because of a network problem; the message looks like this:

currently unavailable chunk 00000000049E301B (inode: 10022376 ; index: 0)
> + currently unavailable reserved file 10022376: search/property/some.file.properties
> unavailable chunks: 1
> unavailable reserved files: 1

How can I solve this problem? Should I remove this file from the metadata?

Thanks in advance!

--
Marlboromoo
From: Papp T. <to...@ma...> - 2011-05-10 23:07:13
hi!

I see this on an MFS slave server:

May 11 00:06:43 backup0 mfsmetalogger[690]: set gid to 113
May 11 00:06:43 backup0 mfsmetalogger[690]: set uid to 106
May 11 00:06:43 backup0 mfsmetalogger[690]: connecting ...
May 11 00:06:43 backup0 mfsmetalogger[690]: open files limit: 5000
May 11 00:06:43 backup0 mfsmetalogger[690]: connected to Master
May 11 00:06:43 backup0 mfsmetalogger[690]: metadata downloaded 95B/0.000559s (0.170 MB/s)
May 11 00:06:43 backup0 mfsmetalogger[690]: changelog_0 downloaded 76B/0.000542s (0.140 MB/s)
May 11 00:06:43 backup0 mfsmetalogger[690]: changelog_1 downloaded 0B/0.000001s (0.000 MB/s)
May 11 00:06:43 backup0 mfsmetalogger[690]: sessions downloaded 354B/0.000387s (0.915 MB/s)
May 11 00:07:56 backup0 mfsmetalogger[690]: connection was reset by Master
May 11 00:08:00 backup0 mfsmetalogger[690]: connecting ...
May 11 00:08:00 backup0 mfsmetalogger[690]: connected to Master
May 11 00:08:00 backup0 mfsmetalogger[690]: metadata downloaded 95B/0.000505s (0.188 MB/s)
May 11 00:08:00 backup0 mfsmetalogger[690]: changelog_0 downloaded 0B/0.000001s (0.000 MB/s)
May 11 00:08:00 backup0 mfsmetalogger[690]: changelog_1 downloaded 126B/0.000510s (0.247 MB/s)
May 11 00:08:00 backup0 mfsmetalogger[690]: sessions downloaded 527B/0.000518s (1.017 MB/s)
May 11 00:09:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000604s (1.445 MB/s)
May 11 00:10:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000517s (1.689 MB/s)
May 11 00:11:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000698s (1.251 MB/s)
May 11 00:12:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000731s (1.194 MB/s)
May 11 00:13:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000666s (1.311 MB/s)
May 11 00:14:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000720s (1.212 MB/s)
May 11 00:15:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000662s (1.319 MB/s)
May 11 00:16:00 backup0 mfsmetalogger[690]: sessions downloaded 873B/0.000720s (1.212 MB/s)
May 11 00:16:37 backup0 mfsmetalogger[690]: connection was reset by Master
May 11 00:16:40 backup0 mfsmetalogger[690]: connecting ...
May 11 00:16:40 backup0 mfsmetalogger[690]: connection failed, error: ECONNREFUSED (Connection refused)
May 11 00:16:41 backup0 kernel: [2881668.049752] mfsmetalogger[690]: segfault at 7f80a7e48f00 ip 00007f80a7b34f75 sp 00007ffffcb3daa0 error 7 in libc-2.11.1.so[7f80a7acd000+17a000]

ii  mfs-metalogger  1.6.20-2ubuntu1  MooseFS metalogger server

from:
deb http://ppa.launchpad.net/bncf-mnt/musa/ubuntu lucid main

Ubuntu 10.04 amd64

The master server is 11.04 with the same packages as above.

Is this a known bug?

Thank you,

tamas
From: twesley1981 <twe...@us...> - 2011-05-04 10:11:07
Hi everybody, I have a question about mfsmakesnapshot.

Description:
Master / Chunk / Client MooseFS Version: 1.6.15
Master / Chunk / Client Operating System: CentOS 5.5 x86_64
Master / Chunk / Client Filesystem: ext3

I have a client that is a web server, and I mount (via mfsmount) a SRC directory of the master server on a local TEST directory. I then make a snapshot (via mfsmakesnapshot) of the SRC directory, which contains about 5,600,000 files.

If I take a snapshot while a client accesses a file in the TEST directory at the same time, I always get a slow response when accessing files.

Does anyone have any suggestions to help me?

BTW, how can I get a consultant to support MooseFS in a company? (Donate or pay for consulting)
From: Wesley Shen. <wcs...@gm...> - 2011-05-04 07:20:53
Hi all,

I have a question about mfsmakesnapshot.

Description:
Master / Chunk / Client MooseFS Version: 1.6.15
Master / Chunk / Client Operating System: CentOS 5.5 x86_64
Master / Chunk / Client Filesystem: ext3

I have a client that is a web server, and I mount (via mfsmount) a SRC directory of the MFS master server on a local TEST directory. I then make a snapshot (via mfsmakesnapshot) of the SRC directory, which contains about 5,600,000 files.

If I take a snapshot on the MFS master server while a client accesses a file in the TEST directory at the same time, I always get a slow response when accessing files.

Does anyone have any suggestions to resolve this?

BTW, how can I get a consultant to support MooseFS in a company? (Donate or pay for consulting)

--
.:: Best Regards ::.
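For reference, the snapshot invocation being described is along these lines; the paths are placeholders. mfsmakesnapshot creates a lazy copy on the same MooseFS volume, so both the source and the destination must live inside the mount.

    # Hypothetical invocation, run on a host with the volume mounted at /mnt/mfs:
    mfsmakesnapshot /mnt/mfs/SRC /mnt/mfs/snapshots/SRC-$(date +%Y%m%d)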
From: Michal B. <mic...@ge...> - 2011-05-04 07:16:45
Hi!

Thanks - this debug is very helpful. Probably we'll manage to fix it.

Kind regards
Michal

From: Robert Dye [mailto:ro...@in...]
Sent: Tuesday, May 03, 2011 2:19 PM
To: 'Michal Borychowski'
Cc: moo...@li...
Subject: [Moosefs-users] FW: SPAM?: RE: More nexenta bugs

OK. I ran mfschunkserver through gdb and listed below is the output (Nexenta):

-------------
mfschunkserver daemon initialized properly

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 18]
0xfee4472b in memset () from /lib/libc.so.1
(gdb) bt
#0  0xfee4472b in memset () from /lib/libc.so.1
#1  0x00000001 in ?? ()
#2  0x0805aa4e in hdd_chunkop (chunkid=1874218, version=0, newversion=0, copychunkid=0, copyversion=0, length=1) at hddspacemgr.c:2238
#3  0x0805ed07 in replicate (chunkid=1874218, version=1, srccnt=1 '\001', srcs=0xd6dcda8 "") at replicator.c:452
#4  0x0804b41f in job_worker (th_arg=0xd0af990) at bgjobs.c:204
#5  0xfeecfe33 in _thrp_setup () from /lib/libc.so.1
#6  0xfeed00c0 in ?? () from /lib/libc.so.1
#7  0x00000000 in ?? ()
(gdb) info threads
  27 LWP 25  0xfee44613 in memcpy () from /lib/libc.so.1
  26 LWP 24  0xfee44613 in memcpy () from /lib/libc.so.1
  25 LWP 23  0xfee44613 in memcpy () from /lib/libc.so.1
  24 LWP 22  0xfee44613 in memcpy () from /lib/libc.so.1
  23 LWP 21  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  22 LWP 20  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  21 LWP 19  0xfee44613 in memcpy () from /lib/libc.so.1
  20 LWP 18  0xfee4472b in memset () from /lib/libc.so.1
  19 LWP 17  0xfee44613 in memcpy () from /lib/libc.so.1
  18 LWP 16  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  17 LWP 15  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  16 LWP 14  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  15 LWP 13  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  14 LWP 12  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  13 LWP 11  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  12 LWP 10  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  11 LWP 9   0xfeed0119 in __lwp_park () from /lib/libc.so.1
  10 LWP 8   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   9 LWP 7   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   8 LWP 6   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   7 LWP 5   0xfeed4cc5 in munmap () from /lib/libc.so.1
   6 LWP 4   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   5 LWP 3   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   1 LWP 1   0xfeed4de5 in __pollsys () from /lib/libc.so.1
From: Michal B. <mic...@ge...> - 2011-05-04 06:42:16
Hi!

From the screenshot one can see that you have lots of chunks (and therefore files) with goal=1. If you lost one chunkserver, you also lost some of those files. After a while you'll see a list of corrupted files.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Max Cantor [mailto:mxc...@gm...]
Sent: Sunday, May 01, 2011 6:33 PM
To: Steve
Cc: moo...@li...
Subject: Re: [Moosefs-users] filesystem check

If the "filesystem check info" box reports "No Data", what does that mean? I lost one of my chunkservers and, based on the screen shot (about 6k chunks have 0 valid copies), it would seem that some data (that was consciously not replicated) has been lost. But I can't figure out which files are gone.

max
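To inspect and raise the replication goal Michal refers to, the client-side tools can be run recursively on the mounted volume; the mount point and file path below are placeholders.

    # Hypothetical check of current goals and copy counts on the mounted volume:
    mfsgetgoal -r /mnt/mfs
    mfscheckfile /mnt/mfs/some/important/file

    # Raise the goal to 2 everywhere, so losing a single chunkserver is survivable:
    mfssetgoal -r 2 /mnt/mfs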
From: Robert D. <ro...@in...> - 2011-05-03 21:18:49
OK. I ran mfschunkserver through gdb and listed below is the output (Nexenta):

-------------
mfschunkserver daemon initialized properly

Program received signal SIGSEGV, Segmentation fault.
[Switching to LWP 18]
0xfee4472b in memset () from /lib/libc.so.1
(gdb) bt
#0  0xfee4472b in memset () from /lib/libc.so.1
#1  0x00000001 in ?? ()
#2  0x0805aa4e in hdd_chunkop (chunkid=1874218, version=0, newversion=0, copychunkid=0, copyversion=0, length=1) at hddspacemgr.c:2238
#3  0x0805ed07 in replicate (chunkid=1874218, version=1, srccnt=1 '\001', srcs=0xd6dcda8 "") at replicator.c:452
#4  0x0804b41f in job_worker (th_arg=0xd0af990) at bgjobs.c:204
#5  0xfeecfe33 in _thrp_setup () from /lib/libc.so.1
#6  0xfeed00c0 in ?? () from /lib/libc.so.1
#7  0x00000000 in ?? ()
(gdb) info threads
  27 LWP 25  0xfee44613 in memcpy () from /lib/libc.so.1
  26 LWP 24  0xfee44613 in memcpy () from /lib/libc.so.1
  25 LWP 23  0xfee44613 in memcpy () from /lib/libc.so.1
  24 LWP 22  0xfee44613 in memcpy () from /lib/libc.so.1
  23 LWP 21  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  22 LWP 20  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  21 LWP 19  0xfee44613 in memcpy () from /lib/libc.so.1
  20 LWP 18  0xfee4472b in memset () from /lib/libc.so.1
  19 LWP 17  0xfee44613 in memcpy () from /lib/libc.so.1
  18 LWP 16  0xfeed7b95 in mmap64 () from /lib/libc.so.1
  17 LWP 15  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  16 LWP 14  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  15 LWP 13  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  14 LWP 12  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  13 LWP 11  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  12 LWP 10  0xfeed0119 in __lwp_park () from /lib/libc.so.1
  11 LWP 9   0xfeed0119 in __lwp_park () from /lib/libc.so.1
  10 LWP 8   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   9 LWP 7   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   8 LWP 6   0xfeed0119 in __lwp_park () from /lib/libc.so.1
   7 LWP 5   0xfeed4cc5 in munmap () from /lib/libc.so.1
   6 LWP 4   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   5 LWP 3   0xfeed40b5 in __nanosleep () from /lib/libc.so.1
   1 LWP 1   0xfeed4de5 in __pollsys () from /lib/libc.so.1
From: Max C. <mxc...@gm...> - 2011-05-01 16:32:59
If the "filesystem check info" box reports "No Data", what does that mean? I lost one of my chunkservers and, based on the screen shot (about 6k chunks have 0 valid copies), it would seem that some data (that was consciously not replicated) has been lost. But I can't figure out which files are gone.

max
From: Fyodor U. <uf...@uf...> - 2011-05-01 04:57:49
On 04/30/2011 05:57 PM, Steve wrote:
> 1. Yes

How long should I wait for that? After 24h I still have these chunks.

> 2. Don't mess about with it. No need.

> -------Original Message-------
>
> From: Fyodor Ustinov
> Date: 04/30/11 14:47:37
> To: moo...@li...
> Subject: [Moosefs-users] missed chunks
>
> Hi.
>
> 1. I've marked the hdd on the chunk server as offline and restarted the chunkserver.
> 2. Stopped the chunkserver.
> 3. Worked with the cluster (created files, deleted files, and so on). Deleted all files; as a result, 0 chunks on the cluster.
> 4. Unmarked the hdd and started the chunkserver.
>
> Now I see in mfs.cgi 5 chunks with "valid copies == 1" and "goal == 0" from the re-added chunkserver. 10 hours have passed.
>
> Two questions:
>
> 1. Should these chunks be purged automatically (after a while)?
> 2. How can I delete these chunks by hand?
>
> WBR,
> Fyodor.