From: Flow J. <fl...@gm...> - 2011-02-28 14:35:28
We see the same crash with MFS and OO 3.1 as described in
http://openoffice.org/bugzilla/show_bug.cgi?id=113207&historysort=new. And since
all of our servers and workstations run CentOS 5.5 and Fedora 12, moving to
OO 3.2 won't be an easy task. A workaround is to add the following lines to
/usr/lib64/openoffice.org3/program/soffice:

    if [ -e /home3/$USER ] ; then
        export HOME=/home3/$USER
    fi

Our /home3 is the original NFS-mounted home directory; it could also be any
other local file system directory.

Flow

On 02/28/2011 10:11 PM, Giovanni Toraldo wrote:
> Hi Steve,
>
> I've checked dmesg, and I found some:
>
> [262517.265151] oosplash.bin[6009]: segfault at b664bae0 ip 0804b5e5 sp
> bf88f8a0 error 6 in oosplash.bin[8048000+6000]
> [...]
>
> however openoffice (1:3.2.0-7ubuntu4.2) seems to be working without any
> other problem.
>
> Bye.
>
> --
> Giovanni Toraldo
> http://www.libersoft.it/
From: Giovanni T. <gt...@li...> - 2011-02-28 14:11:12
Hi Steve,

On 25/02/2011 17:24, Steve Wilson wrote:
> I'm glad to hear that it's working for you but confused that a very
> similar configuration isn't working for us.

I've checked dmesg, and I found some:

[262517.265151] oosplash.bin[6009]: segfault at b664bae0 ip 0804b5e5 sp bf88f8a0 error 6 in oosplash.bin[8048000+6000]
[269154.491051] oosplash.bin[6693]: segfault at b65d2ae0 ip 0804b5e5 sp bfc26060 error 6 in oosplash.bin[8048000+6000]
[269198.971353] oosplash.bin[6732]: segfault at b65beae0 ip 0804b5e5 sp bfba7d80 error 6 in oosplash.bin[8048000+6000]

However, openoffice (1:3.2.0-7ubuntu4.2) seems to be working without any other problem.

Bye.

--
Giovanni Toraldo
http://www.libersoft.it/
From: Michal B. <mic...@ge...> - 2011-02-28 12:36:30
Hi Heiko!

You are definitely right! I made a mistake when I wrote that all chunks of a
file with goal=1 would reside on just one chunkserver. In fact, each chunk of
the file goes (more or less at random) to a different chunkserver.

On the other hand, this brings us back to the point that using goal=1 is
almost always useless, unless the files are temporary and unimportant. The
expectation of a distributed file system is to keep at least two copies of
each file :)

Thank you for being so attentive :)

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Heiko Schröter [mailto:sch...@iu...]
Sent: Saturday, February 26, 2011 2:28 PM
To: mic...@ge...
Cc: moo...@li...
Subject: Re: [Moosefs-users] Chunks

On Fri, 25 Feb 2011 11:21:33 +0100 "Michal Borychowski" <mic...@ge...> wrote:
> If you set goal=1, all chunks belonging to the same file will go to the same
> chunk server.

Supposed to, but it does not work this way. In our test setup (one master, one
client, two chunkservers) the chunks are being spread across the two
chunkservers, even with "goal=1". [...]

Heiko
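A quick way to verify this behavior on a test cluster is to set the goal on a
file and then ask the master where its chunks live. A minimal sketch, assuming
a MooseFS-mounted directory containing a multi-chunk file named bigfile (the
file name is illustrative):

    mfssetgoal 1 bigfile     # keep a single copy of each chunk
    mfsfileinfo bigfile      # list every chunk and the chunkserver holding it

With goal=1 each chunk shows exactly one copy, but consecutive chunks will
typically be reported on different chunkservers, which is the spreading Heiko
describes.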
From: Michal B. <mic...@ge...> - 2011-02-28 12:01:39
Hi!

You can just use the "mfsfileinfo" command as you wrote. If you need only the
addresses, you can parse the output of this command.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Yuri [mailto:yur...@gm...]
Sent: Thursday, February 10, 2011 6:52 AM
To: moo...@li...
Subject: [Moosefs-users] How can i get storage map?

Hi all:

How can I get the chunk servers where the copies of a file are put? For example:

    G1# mfsfileinfo 100M
    100M:
        chunk 0: 0000000000000025_00000001 / (id:37 ver:1)
            copy 1: 1.1.1.91:9422
            copy 2: 1.1.1.93:9422
            copy 3: 1.1.1.94:9422

I need to get the result: 91,93,94. Can I get it from the Master Server
directly? It would be better if my SQL database also got this map table. Is
there any way to achieve it?

Yuri, BG4HKQ, 2011-02-10
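As a sketch of such parsing, the chunkserver addresses can be pulled out of
the mfsfileinfo output with standard text tools. This assumes the
"copy N: ip:port" output format shown above; the exact layout may differ
between MooseFS versions:

    # print the distinct chunkserver IPs holding copies of the file
    mfsfileinfo 100M | awk '/copy/ {split($3, a, ":"); print a[1]}' | sort -u

There is no direct SQL interface on the master server, so a periodic script
running something like the above would be the way to feed such a map table
into a database.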
From: Michal B. <mic...@ge...> - 2011-02-28 11:58:15
Hi!

No, this is not an error; the code is correct. The "tcptoread" function reads
exactly "leng" bytes, but it cannot wait for data longer than "msecto"
milliseconds. The function returns the number of bytes read. Hitting EOF while
reading is treated the same as an error: if we want to read 8 bytes and the
connection breaks, it does not matter whether we received 6, 4 or 0 bytes -
what matters is that it is less than 8. In other places in the code, values
returned by this function are compared to the length passed as a parameter,
and whenever the value received is different it is treated as an error. So, to
be honest, the change you propose doesn't change anything. I hope the
explanation is clear :)

Thank you

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Lin Yang [mailto:id...@gm...]
Sent: Saturday, February 19, 2011 6:25 PM
To: moo...@li...
Subject: [Moosefs-users] A bug in mfs

In mfscommon/sockets.c, line 381:

    int32_t tcptoread(int sock,void *buff,uint32_t leng,uint32_t msecto) {
        uint32_t rcvd=0;
        int i;
        struct pollfd pfd;
        pfd.fd = sock;
        pfd.events = POLLIN;
        while (rcvd<leng) {
            pfd.revents = 0;
            if (poll(&pfd,1,msecto)<0) {
                return -1;
            }
            if (pfd.revents & POLLIN) {
                i = read(sock,((uint8_t*)buff)+rcvd,leng-rcvd);
                if (i<=0) { // this code works only on a long-lived connection without EOF
                    return i;
                }
                rcvd+=i;
            } else {
                errno = ETIMEDOUT;
                return -1;
            }
        }
        return rcvd;
    }

The "if (i<=0)" check should be replaced with:

    if (i==0) { // EOF
        return rcvd;
    }
    if (i<0) {
        return i;
    }

--
Lin Yang
Institute of Computing Technology, Chinese Academy of Sciences
15811038200
id...@gm...
http://idning.javaeye.com/
From: Michal B. <mic...@ge...> - 2011-02-28 11:42:15
Hi!

Unfortunately FUSE for FreeBSD has some errors here and there. For the moment
we are unable to offer any solution; these cases should be reported to the
author of FUSE for FreeBSD. We could suggest downgrading to FreeBSD 7.x or
switching to Linux. Unfortunately, even on FreeBSD 7.x we hit some strange
issues (very seldom, but still) - we think there could be some race conditions
in the FUSE module. These are very rare scenarios, very difficult to
reproduce. Mfsmount is very stable on Linux and has been thoroughly tested in
valgrind against memory problems and race conditions. So we think the
instabilities below do not come from MooseFS directly.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Raymond Jimenez [mailto:ray...@ca...]
Sent: Monday, February 21, 2011 8:33 PM
To: moo...@li...
Subject: [Moosefs-users] MooseFS clients on FreeBSD, stability issues

Hello,

We're currently running an MFS client on FreeBSD 8-STABLE, and we're
experiencing severe issues with stability. For example, if somebody scp's to
the MFS system locally, we have a repeatable crash dump [1]. At other times,
Samba will lock up with a page fault; the problem seems deep enough that it
sometimes refuses to let the kernel dump core.

It seems it might be a FUSE issue, since the crash is consistently in the
fuse4bsd module, but I'm wondering if you know of any workarounds, or if we
should simply switch our clients over to Linux.

I've looked at several other mailing list posts that seem to point towards
instabilities [2,3] in fuse4bsd, but it seems either nobody is interested in
fixing them, or they don't occur often enough to bother people.

Thanks,
Raymond Jimenez

[1] Key portions of the crash dump are included below; more info available at
http://lenin.caltech.edu/~raymondj/core.txt.4
[2] http://lists.freebsd.org/pipermail/freebsd-current/2009-September/011659.html
[3] http://www.mail-archive.com/fre...@fr.../msg126051.html
(IIRC, we did retry rebuilding our module; we think it may be the unmaintained
code clashing with new things in 8.2)

panic: vm_fault: fault on nofault entry, addr: ffffff843e281000
cpuid = 6
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
kdb_backtrace() at kdb_backtrace+0x37
panic() at panic+0x182
vm_fault() at vm_fault+0x1f38
trap_pfault() at trap_pfault+0x308
trap() at trap+0x32f
calltrap() at calltrap+0x8
--- trap 0xc, rip = 0xffffffff8066995b, rsp = 0xffffff8488c8d890, rbp = 0xffffff8488c8d910 ---
copyout() at copyout+0x3b
fusedev_read() at fusedev_read+0x1b3
devfs_read_f() at devfs_read_f+0x81
dofileread() at dofileread+0x88
kern_readv() at kern_readv+0x52
read() at read+0x4e
syscallenter() at syscallenter+0x1d2
syscall() at syscall+0x40
Xfast_syscall() at Xfast_syscall+0xe2

and

Loaded symbols for /usr/local/modules/fuse.ko
(kgdb) bt
#0  doadump () at pcpu.h:224
#1  0xffffffff8041be83 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:419
#2  0xffffffff8041c2f2 in panic (fmt=Variable "fmt" is not available.) at /usr/src/sys/kern/kern_shutdown.c:592
#3  0xffffffff806330df in vm_fault (map=0xffffff0001000000, vaddr=18446743542176419840, fault_type=1 '\001', fault_flags=0) at /usr/src/sys/vm/vm_fault.c:283
#4  0xffffffff8066b7d7 in trap_pfault (frame=0xffffff8488c8d7e0, usermode=0) at /usr/src/sys/amd64/amd64/trap.c:688
#5  0xffffffff8066bbcb in trap (frame=0xffffff8488c8d7e0) at /usr/src/sys/amd64/amd64/trap.c:449
#6  0xffffffff80654188 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:224
#7  0xffffffff8066995b in copyout () at /usr/src/sys/amd64/amd64/support.S:258
#8  0xffffffff8042377b in uiomove (cp=0xffffff843e27f000, n=16384, uio=0xffffff8488c8daa0) at /usr/src/sys/kern/kern_subr.c:168
#9  0xffffffff80e23c58 in fusedev_read () from /usr/local/modules/fuse.ko
#10 0xffffffff803a1a43 in devfs_read_f (fp=0x1, uio=0xffffff00207da800, cred=Variable "cred" is not available.) at /usr/src/sys/fs/devfs/devfs_vnops.c:1084
#11 0xffffffff8045e109 in dofileread (td=0xffffff01150e78c0, fd=5, fp=0xffffff011508b0f0, auio=0xffffff8488c8daa0, offset=Variable "offset" is not available.) at file.h:227
#12 0xffffffff8045e401 in kern_readv (td=0xffffff01150e78c0, fd=5, auio=0xffffff8488c8daa0) at /usr/src/sys/kern/sys_generic.c:238
#13 0xffffffff8045e4d8 in read (td=Variable "td" is not available.) at /usr/src/sys/kern/sys_generic.c:154
#14 0xffffffff80459b78 in syscallenter (td=0xffffff01150e78c0, sa=0xffffff8488c8dba0) at /usr/src/sys/kern/subr_trap.c:315
#15 0xffffffff8066b81f in syscall (frame=0xffffff8488c8dc40) at /usr/src/sys/amd64/amd64/trap.c:888
#16 0xffffffff80654462 in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:377
#17 0x0000000800b77e9c in ?? ()

--
Raymond Jimenez <ray...@ca...>
http://shia.wsyntax.com <> http://fusion.wsyntax.com
From: Flow J. <fl...@gm...> - 2011-02-28 11:15:32
Hi,

We had a similar issue recently; the symptom was that it took a long time for
mfsmaster to start (but it eventually got up and running, after about 5
minutes). Here is what I did to make mfsmaster happy after it starts again:

1. Use the script provided at
http://sourceforge.net/tracker/?func=detail&aid=3104619&group_id=228631&atid=1075722
to release all reserved files. (Comment out the optional section in it to
speed up the process.)
2. Delete all the nonexistent chunks on the chunk server.

I'm not an mfs expert, but these steps do make our mfsmaster server happy: it
now loads about 700MB of metadata in 5 seconds, with no errors in the log
file. I'm also curious about the *official solution* from Michal :)

Thanks
Flow

On 02/28/2011 03:25 AM, Stas Oskin wrote:
> Hi.
>
> We got a very strange issue on our test cluster.
>
> After a power crash, the mfsmaster syslog is full of the following errors:
> Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent
> chunk (00000000005313A2_00000001), so create it for future deletion
> [...]
>
> These errors appear all the time, and practically hang the mfsmaster.
> mfscgi stops working (hangs), and mounts are aborting with the following
> error:
> error receiving data from mfsmaster: ETIMEDOUT (Operation timed out)
>
> Upgrading to .20 didn't help.
>
> Any idea what this could be and how to resolve it?
>
> Thanks.
From: Michal B. <mic...@ge...> - 2011-02-28 11:06:05
Hi!

The sessions*.mfs files store the clients' session parameters (each client's
root directory, whether it is mounted read-write or read-only, uid and gid
mappings, etc.). Before 1.6.19, when you lost such a file you had to remount
the client; as of 1.6.19 the session file is recreated automatically. So you
do not need to copy these files when restoring - they are recreated during
mount.

Kind regards
Michal

From: kuer ku [mailto:ku...@gm...]
Sent: Tuesday, February 22, 2011 9:18 AM
To: moo...@li...
Subject: [Moosefs-users] sessions.mfs and mfs restore on failover

Hi, all,

There is a sessions_ml.mfs on the metalogger backing up metadata, but
sessions.mfs is not mentioned in
http://www.moosefs.org/news-reader/items/metadata-ins-and-outs.html.

What is put in sessions.mfs? And do I need to copy sessions_ml.mfs to
sessions.mfs when restoring metadata from metalogger data?

Thanks

--
kuerant
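For completeness, a rough sketch of the metadata part of such a failover,
restoring from metalogger files with mfsmetarestore; the file locations are
illustrative and assume the metalogger's copies sit in the current directory
and the new master's data directory is /usr/local/var/mfs:

    # rebuild metadata.mfs from the metalogger's backup and changelogs
    mfsmetarestore -m metadata_ml.mfs.back \
                   -o /usr/local/var/mfs/metadata.mfs \
                   changelog_ml.*.mfs

Only the metadata and changelog files need restoring; the sessions file, as
explained above, is recreated when clients mount.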
From: Michal B. <mic...@ge...> - 2011-02-28 11:00:58
Hi!

We saw in the CGI monitor that you had 34 missing chunks lately. When the
master crashed, files which were being saved at the moment of failure may have
become corrupted. If those files are of little importance (temp files, cache,
logs, etc.) you should just delete them (after setting their trash time to 0).
If a file is important, you may try to run "mfsfilerepair" on it: if a chunk
of the file exists and only has a wrong version (because of an unfinished
save), the file's data should reappear; if not, the missing chunks are simply
filled with zeros.

Hope it helps

Regards
Michal

-----Original Message-----
From: Samuel Hassine, Olympe Network [mailto:sam...@ol...]
Sent: Thursday, February 24, 2011 10:16 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Question about goal

Hi,

The MFS master crashed, and although all servers are now up I still have
unavailable chunks:

unavailable chunks: 21467
unavailable trash files: 241
unavailable files: 287

The address: http://on-001.olympe-network.com:9425

Thanks for your help.

Regards.
Samuel Hassine

On 24/02/2011 09:16, Laurent Wandrebeck wrote:
> On Thu, 24 Feb 2011 08:56:54 +0100
> "Samuel Hassine, Olympe Network"<sam...@ol...> wrote:
>
>> Hi,
>>
>> Thank you for your answer. Another question: how can I deal with:
>>
>> unavailable chunks: 21467
>> unavailable trash files: 241
>> unavailable files: 287
>>
>> ?
> You must have lost a server or a couple of disks, IMHO.
> Check that first.
> Regards,
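In command form, the two paths described above might look like this, assuming
a file /mnt/mfs/broken.dat reported as having missing chunks (the mount point
and file name are illustrative):

    # unimportant file: drop it without keeping a trash copy
    mfssettrashtime 0 /mnt/mfs/broken.dat
    rm /mnt/mfs/broken.dat

    # important file: try to recover; chunks that are truly gone
    # are filled with zeros
    mfsfilerepair /mnt/mfs/broken.dat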
From: Michal B. <mic...@ge...> - 2011-02-28 10:39:37
Hi!

For the moment the configuration you describe won't work. If the chunkservers
connect to the master server using addresses in network A, then all clients
will also get addresses from network A (regardless of whether they are in
network A or B). Similarly, if addresses from network B were used, clients
would only get addresses from network B. We cannot think of any workaround for
this, and unfortunately this functionality will not appear on our roadmap soon
either.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: youngcow [mailto:you...@gm...]
Sent: Thursday, February 24, 2011 5:14 PM
To: moosefs-users
Subject: [Moosefs-users] How to use moosefs in two networks

Hi

I have built a moosefs system, and now I have a new requirement. I have two
networks (network A and network B) which can't communicate with each other
directly, and there are moosefs clients in both of them. So I plan to install
two NICs (one connected to network A, one to network B) in each chunk server
and in the master server. But I don't know whether this solution can work
correctly, because I don't know whether the master server will return the
correct chunk server IP address to each client.

Thanks.
From: Michal B. <mic...@ge...> - 2011-02-28 07:18:29
Hi!

How does your metadata file look? What do you have inside the
"/usr/local/var/mfs" (PREFIX/var/mfs) directory?

Regards
Michał

From: Stas Oskin [mailto:sta...@gm...]
Sent: Sunday, February 27, 2011 8:25 PM
To: moosefs-users
Subject: [Moosefs-users] Non-existing chunks hang the mfsmaster

Hi.

We got a very strange issue on our test cluster. After a power crash, the
mfsmaster syslog is full of the following errors:

Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005313A2_00000001), so create it for future deletion
[...]

These errors appear all the time, and practically hang the mfsmaster. [...]

Thanks.
From: Stas O. <sta...@gm...> - 2011-02-27 19:25:44
Hi.

We got a very strange issue on our test cluster.

After a power crash, the mfsmaster syslog is full of the following errors:

Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005313A2_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005393A2_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005493A3_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005293A3_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005513A3_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005313A3_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005113A3_00000001), so create it for future deletion
Feb 27 19:19:21 web1 mfsmaster[30654]: chunkserver has nonexistent chunk (00000000005093A3_00000001), so create it for future deletion

These errors appear all the time, and practically hang the mfsmaster. mfscgi
stops working (hangs), and mounts are aborting with the following error:

error receiving data from mfsmaster: ETIMEDOUT (Operation timed out)

Upgrading to .20 didn't help.

Any idea what this could be and how to resolve it?

Thanks.
From: Flow J. <fl...@gm...> - 2011-02-26 17:20:27
Hi,

I now know how to re-create the issue, and it should also be related to
autofs. I noticed that if I make a new file with non-zero size and delete it
immediately, it is set as a reserved file and kept in that state for a while
(about 1 minute in our environment). If the underlying MooseFS is unmounted
within this short period, the behavior differs between an fstab-mounted FS and
an autofs-mounted FS:

* If the FS is mounted via fstab: create and delete a file, then unmount the
FS - the file eventually gets deleted.
* If the FS is mounted via autofs: create and delete a file, then unmount the
FS - the file remains reserved forever.

Unfortunately we rely on the timeout feature of autofs, so unmounts happen
frequently. This is the reason we have so many cache files (created and then
deleted during one mount session) remaining as reserved files forever.

Our autofs mount options are mentioned in the other thread
(http://sourceforge.net/mailarchive/forum.php?thread_name=4D68E18F.3000708%40gmail.com&forum_name=moosefs-users),
and the issue there looks the same:

* If the FS is mounted via fstab, writing a file (> 1MB) to some folder and
then unmounting works OK.
* If the FS is mounted via autofs, writing a file (> 1MB) to some folder and
then unmounting dumps a core file.

Increasing the autofs timeout would help with the reserved-file issue but not
with the core dump issue. And even with a longer timeout, a restart or
power-off of the system still won't leave enough time for the files to be
completely deleted.

I do have a workaround that fixes the two issues temporarily: mount the entire
MFS system via fstab and use the "-fstype=bind" option in autofs to make
automatic mounting/unmounting happen. But this is complicated, and different
mfs mount options can't be set for different subfolders. So I do hope I can
have native autofs-supported MooseFS mounts. Can anyone help with this?

Many Thanks
Flow

On 02/22/2011 12:03 AM, Flow Jiang wrote:
> Hi,
>
> Just found another issue. I cleared about 10000 reserved files with
> the script provided at
> http://sourceforge.net/tracker/?func=detail&aid=3104619&group_id=228631&atid=1075722
> yesterday, and this morning I had 0 reserved files when I started working.
>
> However, after one day of development activity with 6 workstations, we now
> have 204 reserved files not deleted. I've noticed it's stated that
> "Each session after two hours is automatically closed and all the
> files are released." in the above link, but it seems that's not happening
> in our environment. We have CentOS 5.5 x86 servers and run mfsmount on
> Fedora 12 x64 workstations. Both servers and workstations run mfs 1.6.19,
> and mfs is serving as home with read/write access.
>
> Here are some examples of the reserved files, found by reading the metadata:
>
> 00067856|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal
> 00067857|UserHome|tompan|.mozilla|firefox|mk9e32d7.default|OfflineCache|index.sqlite-journal
>
> Most of the 204 reserved files look like temp / journal files.
>
> Any ideas about the cause of the issue?
>
> BTW, OpenOffice fails to start if MFS serves as the home directory. It
> should be an FS bug as stated in:
> http://qa.openoffice.org/issues/show_bug.cgi?id=113207. Would it be
> related to the issue above? And can we fix this OOo issue?
>
> Many Thanks
> Flow
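For reference, the fstab + bind workaround mentioned above could look roughly
like this; the master host name, mount point and subfolder are illustrative:

    # /etc/fstab - keep the whole MooseFS tree permanently mounted
    mfsmount  /mnt/mfs  fuse  defaults,mfsmaster=mfsmaster  0 0

    # /etc/auto.home - autofs only binds per-user directories out of it
    *  -fstype=bind  :/mnt/mfs/UserHome/&

Since mfsmount itself is never unmounted by autofs, the reserved-file and
core-dump problems at unmount time are avoided, at the cost of a single set of
mount options for the whole tree.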
From: Heiko S. <sch...@iu...> - 2011-02-26 13:27:52
On Fri, 25 Feb 2011 11:21:33 +0100 "Michal Borychowski" <mic...@ge...> wrote:

> Hi!
>
> If you set goal=1, all chunks belonging to the same file will go to the same
> chunk server.

Supposed to, but it does not work this way. In our test setup (one master, one
client, two chunkservers) the chunks are being spread across the two
chunkservers, even with "goal=1". df on the chunkservers clearly shows that
the files are spread evenly across them, and mfsfileinfo shows that the chunks
are placed alternately on the chunk servers. Incrementing "goal" only
increases the number of copies.

Screenshots can be supplied on Tuesday since our server room is under
reconstruction over the weekend.

Heiko
From: Flow J. <fl...@gm...> - 2011-02-26 11:18:57
I found an easier way to re-create the issue. Just keep the auto.home as:

    * -fstype=fuse,mfssubfolder=UserHome/& :mfsmount

Then copy a file to an unmounted user folder, and run:

    service autofs stop

The core file will be dumped.

Flow

On 02/26/2011 04:44 PM, Flow Jiang wrote:
> Hi,
>
> After merging moosefs into our production environment for about 2
> weeks, I have found a lot of core files dumped from mfsmount
> remaining in the "/" directory. All the backtraces look similar:
> they die at the free(freecblockshead) call in write_data_term(), when
> mainloop() ends.
> [...]
From: Flow J. <fl...@gm...> - 2011-02-26 08:44:57
Hi,

After merging moosefs into our production environment for about 2 weeks, I
have found a lot of core files dumped from mfsmount remaining in the "/"
directory. All the backtraces look similar: they die at the
free(freecblockshead) call in write_data_term(), when mainloop() ends.

Fedora 12 x64 (mfsmount is compiled from source):

Core was generated by `mfsmount /home/fwjiang -o rw,mfssubfolder=UserHome/fwjiang'.
Program terminated with signal 6, Aborted.
#0  0x00000039ab4327f5 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install fuse-libs-2.8.5-2.fc12.x86_64 glibc-2.11.2-3.x86_64 libgcc-4.4.4-10.fc12.x86_64
(gdb) bt
#0  0x00000039ab4327f5 in raise () from /lib64/libc.so.6
#1  0x00000039ab433fd5 in abort () from /lib64/libc.so.6
#2  0x00000039ab46fa1b in __libc_message () from /lib64/libc.so.6
#3  0x00000039ab475336 in malloc_printerr () from /lib64/libc.so.6
#4  0x000000000040eb12 in write_data_term () at writedata.c:906
#5  0x000000000041282d in mainloop (args=0x7fff49f484d0, mp=0x1bb72e0 "/tmp/autotAfzu1", mt=1, fg=0) at main.c:599
#6  0x0000000000412c48 in main (argc=<value optimized out>, argv=0x7fff49f485f8) at main.c:819

CentOS 5.5 x86 (mfsmount is from the DAG repository):

Core was generated by `mfsmount /project/ui -o rw,mfssubfolder=ProjectData/project/ui'.
Program terminated with signal 6, Aborted.
#0  0x00417410 in __kernel_vsyscall ()
(gdb) bt
#0  0x00417410 in __kernel_vsyscall ()
#1  0x00a8ddf0 in raise () from /lib/libc.so.6
#2  0x00a8f701 in abort () from /lib/libc.so.6
#3  0x00ac628b in __libc_message () from /lib/libc.so.6
#4  0x00ace5a5 in _int_free () from /lib/libc.so.6
#5  0x00ace9e9 in free () from /lib/libc.so.6
#6  0x08056cc3 in write_data_term ()
#7  0x0805a768 in mainloop ()
#8  0x0805ab37 in main ()

The auto.home file used to auto-mount user homes on the Fedora 12 boxes looks
like:

    * -fstype=fuse,mfssubfolder=UserHome/& :mfsmount

All the servers and clients run mfs 1.6.19, and all core files come from
mounts with read/write access. By checking the time of the core dump listed
above, I found it was dumped when autofs timed out (the default timeout is
300s on CentOS 5.5).

So I tried manually copying a file (about 80MB) to a user folder which hadn't
been auto-mounted yet, then waited 300s until the folder was auto-unmounted;
the core was dumped as expected.

Does anyone have the same issue? Am I doing the right thing by auto-mounting
with MooseFS?

Thanks
Flow
From: Steve W. <st...@pu...> - 2011-02-25 16:24:59
On 02/25/2011 10:21 AM, Giovanni Toraldo wrote:
> On 25/02/2011 16:09, Steve Wilson wrote:
>> Thanks... this is very encouraging! [...]
> I've followed the few steps described in the main howto; no special
> options are configured:
>
> MooseFS 1.6.20, 2 mfs-chunkservers on Debian Lenny x86_64 (exported as
> ext4 with the relatime option), mfs-master on Debian Squeeze x86_64;
> clients are Ubuntu 10.04 x86.
>
> Bye.

I'm glad to hear that it's working for you but confused that a very similar
configuration isn't working for us.

BTW, this bug in OpenOffice appears to be fixed in version 3.3:
http://openoffice.org/bugzilla/show_bug.cgi?id=105828

Steve
From: Giovanni T. <gt...@li...> - 2011-02-25 15:21:53
On 25/02/2011 16:09, Steve Wilson wrote:
> Thanks... this is very encouraging! We are also using Ubuntu 10.04 and
> would like to start migrating users from local home directories to a
> single MooseFS volume. But we need to get OpenOffice behaving well with
> MooseFS before we can continue.

Nice, so we are starting a migration in the next few days.

We were already using GlusterFS for /home, but I had a lot of sporadic
problems with file truncation and locking, e.g. Firefox not removing .lock,
Thunderbird very slow when downloading new emails, Gnome gvfsd-metadata
hanging at 100% CPU, and corruption of desktop icon positions.

> May I ask some details of the options used to export MooseFS from the
> server and the options used for mounting on the client? And you are
> using the Ubuntu OpenOffice package (version 3.2)? Which version of
> MooseFS are you using?

I've followed the few steps described in the main howto; no special options
are configured:

MooseFS 1.6.20, 2 mfs-chunkservers on Debian Lenny x86_64 (exported as ext4
with the relatime option), mfs-master on Debian Squeeze x86_64; clients are
Ubuntu 10.04 x86.

Bye.

--
Giovanni Toraldo
http://www.libersoft.it/
From: Steve W. <st...@pu...> - 2011-02-25 15:09:50
On 02/25/2011 09:07 AM, Giovanni Toraldo wrote:
> Hi Steven,
>
> On 25/02/2011 02:08, Steven M Wilson wrote:
>> I'm unable to get OpenOffice (version 3.2) to run as a user with a home
>> directory in a MooseFS (version 1.6.20) mounted volume. It will display
>> the splash screen and then exit. [...]
> we are successfully running /home on MooseFS, without any issues for
> openoffice on ubuntu 10.04.
>
> Bye.

Thanks... this is very encouraging! We are also using Ubuntu 10.04 and would
like to start migrating users from local home directories to a single MooseFS
volume. But we need to get OpenOffice behaving well with MooseFS before we can
continue.

May I ask some details of the options used to export MooseFS from the server
and the options used for mounting on the client? And are you using the Ubuntu
OpenOffice package (version 3.2)? Which version of MooseFS are you using?

It looks like I'm having the same problem as reported in this OpenOffice bug:
http://openoffice.org/bugzilla/show_bug.cgi?id=113207&historysort=new

strace is showing these related lines (OpenOffice appears to be truncating a
file that it has just unlinked!):

[pid 22207] open("/net/post/stevew/.execoooh0c2YJ", O_RDWR|O_CREAT|O_EXCL, 0600) = 25
[pid 22207] unlink("/net/post/stevew/.execoooh0c2YJ") = 0
[pid 22207] ftruncate(25, 4096) = -1 EPERM (Operation not permitted)
[pid 22207] mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_SHARED, 25, 0) = 0xb3faf000
[pid 22207] mmap2(NULL, 4096, PROT_READ|PROT_EXEC, MAP_SHARED, 25, 0) = 0x459000
[pid 22207] --- SIGBUS (Bus error) @ 0 (0) ---

Thanks and best regards,
Steve
From: Giovanni T. <gt...@li...> - 2011-02-25 14:40:53
Hi,

I was wondering about the ability to configure a priority flag for each
available chunkserver. On my setup I have two servers, a master+chunk and a
metalogger+chunk, and a bunch of desktop clients (~50 now, but they will
probably increase in the future).

Each client has a spare disk of 40/80/160 GB that is mostly unused. What if I
used each desktop client as a small chunk server? The problem is that even
with a high goal, some chunks may end up only on nodes that are offline at a
particular moment of the day (mostly part-time workers here).

If I were able to configure the 2 big chunkservers as high priority and the
small chunkservers as low priority, I would be reasonably sure that at least
the first two copies of every chunk are always on the servers.

Any comment is greatly appreciated. Thanks.

--
Giovanni Toraldo
http://www.libersoft.it/
From: Giovanni T. <gt...@li...> - 2011-02-25 14:07:19
Hi Steven,

On 25/02/2011 02:08, Steven M Wilson wrote:
> I'm unable to get OpenOffice (version 3.2) to run as a user with a home
> directory in a MooseFS (version 1.6.20) mounted volume. It will display
> the splash screen and then exit. OpenOffice works fine for the same user
> with a home directory on the local disk. I suspect this has something to
> do with file locking or some unorthodox file handling tricks by
> OpenOffice. Is this a known problem? And, if so, is there a workaround?

We are successfully running /home on MooseFS, without any issues for
OpenOffice on Ubuntu 10.04.

Bye.

--
Giovanni Toraldo
http://www.libersoft.it/
From: Michal B. <mic...@ge...> - 2011-02-25 10:35:32
Hi Steven!

Unfortunately we've never run OpenOffice on MooseFS... Though MooseFS doesn't
support "global" file locking, it does support locking "per client", so this
shouldn't be the issue (http://www.moosefs.org/moosefs-faq.html#file-locking).

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Steven M Wilson [mailto:st...@pu...]
Sent: Friday, February 25, 2011 2:08 AM
To: moo...@li...
Subject: [Moosefs-users] OpenOffice and MooseFS

Hi,

I'm unable to get OpenOffice (version 3.2) to run as a user with a home
directory in a MooseFS (version 1.6.20) mounted volume. [...]

Thanks!
Steve
From: Laurent W. <lw...@hy...> - 2011-02-25 10:27:18
On Fri, 25 Feb 2011 11:21:33 +0100 "Michal Borychowski" <mic...@ge...> wrote:

> Hi!
>
> If you set goal=1, all chunks belonging to the same file will go to the same
> chunk server.

That one is good for a FAQ entry :)

--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michal B. <mic...@ge...> - 2011-02-25 10:21:52
Hi!

If you set goal=1, all chunks belonging to the same file will go to the same
chunk server.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Heiko Schröter [mailto:sch...@iu...]
Sent: Thursday, February 24, 2011 10:09 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Chunks

On Thursday 24 February 2011 at 09:54:14, Laurent Wandrebeck wrote:
> On Thu, 24 Feb 2011 09:24:11 +0100
> Heiko Schröter <sch...@iu...> wrote:
>
>> Hello,
> Hi,
>>
>> we are currently investigating moosefs as a successor to our 200TB
>> storage cfs.
>> mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5
>> Everything is working fine.
>>
>> We have a question about the way mfs handles chunks.
>> Is it possible to keep the chunks on a single chunkserver, instead of
>> "load balancing" them across all chunkservers?
> Not that I know of. mfs is designed for reliability, and working with
> goal=1 is as bad as raid 0 when it comes to it.

Thanks for the reply, but that was not what I intended to ask. I am trying to
keep the "pieces" (chunks) of a file on a single chunkserver. And yes, I am
quite clear about the risks of setting goal=1; but goal only affects the
number of copies of the whole file, as far as I understand it. This is what we
have now with our lustre system anyway ;-)

Our chunkservers (RAIDs) generally run a hardware RAID6 with 5 to 8TB per
partition. I wouldn't mind if all the chunks were spread across these
partitions, as long as they stay on a single chunkserver.

So, would it be possible to gather all chunks of a single file on a single
chunkserver?

Regards
Heiko
From: Steven M W. <st...@pu...> - 2011-02-25 01:08:07
Hi,

I'm unable to get OpenOffice (version 3.2) to run as a user with a home
directory in a MooseFS (version 1.6.20) mounted volume. It will display the
splash screen and then exit. OpenOffice works fine for the same user with a
home directory on the local disk. I suspect this has something to do with file
locking or some unorthodox file handling tricks by OpenOffice. Is this a known
problem? And, if so, is there a workaround?

Thanks!
Steve