From: Piotr R. K. <pio...@mo...> - 2014-08-26 15:00:08
Dear all,

About 3 weeks ago Joe sent to this list information about an issue on FreeBSD *10.0* (quoted below). We looked into it and found that the problem lies in the new implementation of FUSE (a race) in FreeBSD 10.0. The problem occurs on machines with 2 or more cores/CPUs. We are going to report the bug to FreeBSD.

We have also created a *workaround*, which is implemented in version 2.0.34-1 of MooseFS (for beta tests only). You can install the newest version of MooseFS (2.0.34-1), which now works on FreeBSD 10.0, from the package repository:

1. Create the directory /usr/local/etc/pkg/repos (# mkdir -p /usr/local/etc/pkg/repos)
2. Create the file moosefs.conf in this directory
3. Insert the following contents into the moosefs.conf file:

moosefs: {
    url: "http://ppa.moosefs.com/freebsd/10:x86:64",
    enabled: yes,
    mirror_type: NONE
}

4. Finally, run # pkg update and # pkg install moosefs-ce-XXXX or # pkg install moosefs-pro-XXXX, where XXXX is the MooseFS module name (e.g. master, chunkserver, mount, ...)

It is also possible to add a repo for x86 (32-bit) versions of FreeBSD: change "10:x86:64" to "10:x86:32" in "url:".

--
Best regards,
Piotr Robert Konopelko
*MooseFS Technical Support Engineer*
pio...@mo...[1] | moosefs.com[2]

On Friday, August 01, 2014 4:15:55 PM Joseph Love <jo...@ge...> wrote:
> Hi,
>
> I've been trying to get a feel for MooseFS's performance, especially after
> learning that MooseFS is available in the FreeBSD ports tree. It seems,
> however, that the MooseFS fuse-client is not stable on FreeBSD 10.0.
>
> I installed the chunk server on 3 nodes, a master on a 4th. I then installed
> the fuse-client on a separate FreeBSD system, and did some tests with 'dd'
> (e.g., dd if=/dev/zero of=test.file). Performance was so-so, but dd isn't
> always a really great experiment.
>
> Eventually, I decided I needed something designed to test disk IO
> performance, and ran "iozone -a". It ran for a little bit, then threw an
> unexpected error at me:
>
>> Can not open temp file: iozone.tmp
>> open: Device not configured
>
> OR (in the case of a previous attempt):
>
>> Error writing block 0, fd= 3
>> write: Input/output error
>> iozone: interrupted
>> exiting iozone
>
> In both cases, the mounted filesystem was no longer usable. Even running
> "ls" from the mount would give this error:
>
> ls: .: Device not configured
>
> Then I tried the MooseFS fuse client on mac. It not only ran successfully,
> but for the tests that I could compare, it gave output figures substantially
> higher (some of which suggest a caching, compression, or throughput
> calculation issue, as they exceeded the gigabit network link).
>
> Are there known issues with MooseFS on FreeBSD 10, specifically the
> fuse-client?
>
> If anyone has suggestions or experience, I'd love to hear it.
>
> Thanks,
> -Joe

--------
[1] mailto:pio...@mo...
[2] http://moosefs.com
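Put together, the steps above amount to a single copy-pasteable root session; a minimal sketch, using moosefs-ce-master as a stand-in for whichever module you actually need (master, chunkserver, mount, ...):

mkdir -p /usr/local/etc/pkg/repos
cat > /usr/local/etc/pkg/repos/moosefs.conf <<'EOF'
moosefs: {
    url: "http://ppa.moosefs.com/freebsd/10:x86:64",
    enabled: yes,
    mirror_type: NONE
}
EOF
pkg update
pkg install moosefs-ce-master   # or moosefs-pro-<module>: chunkserver, mount, ...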
From: Neddy, N. N. <na...@nd...> - 2014-08-26 04:39:14
Hi,

Okay, it's my fault, the proxy was caching aggressively. I can update to the new version (2.0.34-1) now.

Thanks

On Mon, Aug 25, 2014 at 1:52 PM, Aleksander Wieliczko <ale...@co...> wrote:
> Hi.
> I would like to inform you that the latest MooseFS release is 2.0.34-1.
> Version 2.0.33-1 was available only for two days; after creating a
> workaround for the bug in the FreeBSD FUSE implementation we published a
> new release.
>
> So please do apt-get update and then apt-get upgrade to fix this problem.
>
> Best regards
> Aleksander Wieliczko
> Technical Support Engineer
> http://moosefs.com | ale...@mo...
>
> On 08/25/2014 04:06 AM, Neddy, NH. Nam wrote:
> > Hi everybody, I'm new here.
> >
> > I'm using moosefs-ce-client 2.0.29 on Ubuntu 14.04 (Trusty) and when
> > running aptitude upgrade it shows that the newer package can't be found.
> >
> > Err http://ppa.moosefs.com/apt/ubuntu/trusty/ trusty/main moosefs-ce-client amd64 2.0.33-1
> >   404 Not Found
> > E: Failed to fetch http://ppa.moosefs.com/apt/ubuntu/trusty/pool/main/m/moosefs-ce/moosefs-ce-client_2.0.33-1_amd64.deb
> >   404 Not Found
> > E: Unable to fetch some archives, maybe run apt-get update or try with
> > --fix-missing?
> >
> > Is this a mystery update?
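For anyone hitting the same stale-index symptom behind a caching proxy, the fix in this thread boils down to refreshing the package lists before upgrading; a minimal sketch (Acquire::http::No-Cache is apt's standard way of asking intermediate proxies not to serve cached responses, and it only helps if the proxy honors it):

apt-get update
apt-get upgrade                                  # picks up moosefs-ce-client 2.0.34-1
apt-get -o Acquire::http::No-Cache=True update   # if a proxy still serves stale lists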
From: Aleksander W. <ale...@co...> - 2014-08-25 07:31:50
Hi.

I would like to inform you that the latest MooseFS release is 2.0.34-1. Version 2.0.33-1 was available only for two days; after creating a workaround for the bug in the FreeBSD FUSE implementation we published a new release.

So please do apt-get update and then apt-get upgrade to fix this problem.

Best regards
Aleksander Wieliczko
Technical Support Engineer
http://moosefs.com | ale...@mo...

On 08/25/2014 04:06 AM, Neddy, NH. Nam wrote:
> Hi everybody, I'm new here.
>
> I'm using moosefs-ce-client 2.0.29 on Ubuntu 14.04 (Trusty) and when
> running aptitude upgrade it shows that the newer package can't be found.
>
> Err http://ppa.moosefs.com/apt/ubuntu/trusty/ trusty/main moosefs-ce-client amd64 2.0.33-1
>   404 Not Found
> E: Failed to fetch http://ppa.moosefs.com/apt/ubuntu/trusty/pool/main/m/moosefs-ce/moosefs-ce-client_2.0.33-1_amd64.deb
>   404 Not Found
> E: Unable to fetch some archives, maybe run apt-get update or try with
> --fix-missing?
>
> Is this a mystery update?
From: Neddy, N. N. <na...@nd...> - 2014-08-25 03:05:11
Hi everybody, I'm new here.

I'm using moosefs-ce-client 2.0.29 on Ubuntu 14.04 (Trusty) and when running aptitude upgrade it shows that the newer package can't be found.

Err http://ppa.moosefs.com/apt/ubuntu/trusty/ trusty/main moosefs-ce-client amd64 2.0.33-1
  404 Not Found
E: Failed to fetch http://ppa.moosefs.com/apt/ubuntu/trusty/pool/main/m/moosefs-ce/moosefs-ce-client_2.0.33-1_amd64.deb
  404 Not Found
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

Is this a mystery update?
From: Boli <bo...@le...> - 2014-08-11 12:26:43
I was unaware of mfs4win, but I cannot find a download link on either the English or Polish pages... which is disappointing.

On 11/08/2014 14:17, 김지호 wrote:
> Hi.
>
> So we'll test MooseFS, but the Windows client is not open.
>
> http://prog.olsztyn.pl/mfs4win/en/
>
> I can't download from this URL.
From: 김지호 <jh...@sc...> - 2014-08-11 11:39:02
Hi. I have some questions.

We considered file systems for Windows (e.g. HDFS / DFS / GlusterFS ...), but these file systems do not work on Windows. We read:

"The master server, metalogger server and chunkservers can also be run on Solaris or Windows with Cygwin. Unfortunately without FUSE it won't be possible to mount the filesystem within these operating systems."

So we'll test MooseFS, but the Windows client is not open:

http://prog.olsztyn.pl/mfs4win/en/

We can't download from this URL.

Should we develop a Windows client ourselves with FUSE / Dokan? Is there no easy way to mount MooseFS on Windows?

MooseFS server OS: Windows 2012 R2, Quad Core CPU / 30TB disk / 32GB RAM

We appreciate your MooseFS! From South Korea!
From: Davies L. <dav...@gm...> - 2014-08-07 18:38:58
It seems that you have lost parts of your metadata. In detail, you:

1. have all the file and directory objects (and their relations to chunks);
2. have part of the names of files and directories (the missing ones cannot be recovered);
3. have lost all free inodes (these are not important and can be recovered later);
4. have lost all the information about chunks (this can be recovered).

In order to recover as much of the metadata as possible, here are some steps:

0. Back up all the metadata and changelog files.

1. Scan all the disks on the chunkservers to get the information about the chunks, including chunk id and version. The files look like this:

/disk/{XX}/chunk_{XXXXXXXXXXXXXXXX}_{XXXXXXXX}.mfs

In the filename, the first part is the id of the chunk (in hex) and the second part is the version (in hex). Put these together and write them into a file; the format should be {ID:64}{VERSION:32}{0:32} (the last field is the locktime, which can be 0 - see chunk_load() or chunk_dump() in chunk.c).

2. Scan all the changelogs, filter out CREATE, LINK, MOVE and SYMLINK, turn them into edges with names, and dump them into a file. The format can be found in mfsmaster/filesystem.c (load_edge/dump_edge).

3. Modify mfsmetarestore (mfsmaster/filesystem.c):
a) if it fails to load an edge, continue by loading the edge file you dumped; it should handle all kinds of error cases;
b) after loading the edges, modify chunk_load() to load the chunk file you dumped;
c) create several recovery directories (such as recovered/XXX/);
d) go through all the files and directories and, if they have no parent, put them into the recovery directories (which directory to use can be based on the range of file ids);
e) dump the result as a correct metadata file.

4. Run mfsmetarestore with the newly recovered metadata and hope that we can recover some more metadata from the changelogs (fixing any error cases while applying the changelogs); then we get a more complete metadata file.

5. Start mfsmaster (remove all the changelogs).

PS: in order to regenerate the free inodes, you could go through 1 to 2^32 and find all the unused inodes.

cc to the mailing list - hope this can help other people.
Davies On Thu, Aug 7, 2014 at 10:19 AM, Jakubiec, Jakub (NSN - PL/Wroclaw) <jak...@ns...> wrote: > Hi Davies, > > Thank you very much for your interest:) > > Master: > /var/lib/mfs# ls -latr > total 15737756 > drwxr-xr-x 2 moosefs moosefs 4096 Apr 13 2013 metalogger > -rw-r----- 1 moosefs moosefs 17090754 Apr 13 2013 changelog_back.0.mfs > -rw-r----- 1 moosefs moosefs 90621422 Apr 13 2013 changelog_back.1.mfs > drwx------ 2 moosefs moosefs 6 Apr 13 2013 lost+found > -rw-r----- 1 moosefs moosefs 314203552 Aug 5 10:00 changelog.50.mfs > -rw-r----- 1 moosefs moosefs 303673157 Aug 5 11:00 changelog.49.mfs > -rw-r----- 1 moosefs moosefs 303214439 Aug 5 12:00 changelog.48.mfs > -rw-r----- 1 moosefs moosefs 308574747 Aug 5 13:00 changelog.47.mfs > -rw-r----- 1 moosefs moosefs 307029190 Aug 5 14:00 changelog.46.mfs > -rw-r----- 1 moosefs moosefs 303685261 Aug 5 15:00 changelog.45.mfs > -rw-r----- 1 moosefs moosefs 312222446 Aug 5 16:00 changelog.44.mfs > -rw-r----- 1 moosefs moosefs 336560639 Aug 5 17:00 changelog.43.mfs > -rw-r----- 1 moosefs moosefs 304397283 Aug 5 18:00 changelog.42.mfs > -rw-r----- 1 moosefs moosefs 301038165 Aug 5 19:00 changelog.41.mfs > -rw-r----- 1 moosefs moosefs 302219981 Aug 5 20:00 changelog.40.mfs > -rw-r----- 1 moosefs moosefs 295888952 Aug 5 21:00 changelog.39.mfs > -rw-r----- 1 moosefs moosefs 296489807 Aug 5 22:00 changelog.38.mfs > -rw-r----- 1 moosefs moosefs 293353462 Aug 5 23:00 changelog.37.mfs > -rw-r----- 1 moosefs moosefs 299229297 Aug 6 00:00 changelog.36.mfs > -rw-r----- 1 moosefs moosefs 296994199 Aug 6 01:00 changelog.35.mfs > -rw-r----- 1 moosefs moosefs 295315907 Aug 6 02:00 changelog.34.mfs > -rw-r----- 1 moosefs moosefs 294806152 Aug 6 03:00 changelog.33.mfs > -rw-r----- 1 moosefs moosefs 296370589 Aug 6 04:00 changelog.32.mfs > -rw-r----- 1 moosefs moosefs 293806115 Aug 6 05:00 changelog.31.mfs > -rw-r----- 1 moosefs moosefs 295590838 Aug 6 06:00 changelog.30.mfs > -rw-r----- 1 moosefs moosefs 297699978 Aug 6 07:00 changelog.29.mfs > -rw-r----- 1 moosefs moosefs 303900353 Aug 6 08:00 changelog.28.mfs > -rw-r----- 1 moosefs moosefs 293789125 Aug 6 08:59 changelog.27.mfs > -rw-r----- 1 moosefs moosefs 295412293 Aug 6 09:59 changelog.26.mfs > -rw-r----- 1 moosefs moosefs 307498017 Aug 6 11:00 changelog.25.mfs > -rw-r----- 1 moosefs moosefs 302684427 Aug 6 11:59 changelog.24.mfs > -rw-r----- 1 moosefs moosefs 309025916 Aug 6 12:59 changelog.23.mfs > -rw-r----- 1 moosefs moosefs 306549856 Aug 6 13:59 changelog.22.mfs > -rw-r----- 1 moosefs moosefs 303397422 Aug 6 14:59 changelog.21.mfs > -rw-r----- 1 moosefs moosefs 301542645 Aug 6 15:59 changelog.20.mfs > -rw-r----- 1 moosefs moosefs 299680578 Aug 6 16:59 changelog.19.mfs > -rw-r----- 1 moosefs moosefs 300461753 Aug 6 17:59 changelog.18.mfs > -rw-r----- 1 moosefs moosefs 297618384 Aug 6 18:59 changelog.17.mfs > -rw-r----- 1 moosefs moosefs 303336120 Aug 6 19:59 changelog.16.mfs > -rw-r----- 1 moosefs moosefs 299245908 Aug 6 21:00 changelog.15.mfs > -rw-r----- 1 moosefs moosefs 294583240 Aug 6 22:00 changelog.14.mfs > -rw-r----- 1 moosefs moosefs 296407449 Aug 6 23:00 changelog.13.mfs > -rw-r----- 1 moosefs moosefs 301720434 Aug 7 00:00 changelog.12.mfs > -rw-r----- 1 moosefs moosefs 290318501 Aug 7 01:00 changelog.11.mfs > -rw-r----- 1 moosefs moosefs 297943000 Aug 7 02:00 changelog.10.mfs > -rw-r----- 1 moosefs moosefs 296018347 Aug 7 03:00 changelog.9.mfs > -rw-r----- 1 moosefs moosefs 287716747 Aug 7 04:00 changelog.8.mfs > -rw-r----- 1 moosefs moosefs 292535667 Aug 7 05:00 changelog.7.mfs > 
-rw-r----- 1 moosefs moosefs 294573131 Aug 7 06:00 changelog.6.mfs > -rw-r----- 1 moosefs moosefs 286506085 Aug 7 07:00 changelog.5.mfs > -rw-r----- 1 moosefs moosefs 294287782 Aug 7 08:00 changelog.4.mfs > -rw-r----- 1 moosefs moosefs 297873007 Aug 7 09:00 changelog.3.mfs > -rw-r----- 1 moosefs moosefs 12355 Aug 7 09:59 sessions.mfs > -rw-r----- 1 moosefs moosefs 300092149 Aug 7 10:00 changelog.2.mfs > -rw-r----- 1 moosefs moosefs 78083828 Aug 7 10:32 changelog.1.mfs > -rw-r----- 1 moosefs moosefs 610016 Aug 7 10:32 stats.mfs > -rw-r----- 1 moosefs moosefs 1221636096 Aug 7 10:32 metadata.mfs > -rw-r----- 1 moosefs moosefs 0 Aug 7 14:13 .mfsmaster.lock > > > Metalogger 1: > /var/lib/mfs# ls -altr > total 16229288 > drwxr-xr-x 38 root root 4096 Sep 6 2012 .. > -rw-r----- 1 mfs mfs 296479738 Aug 5 09:35 changelog_ml.50.mfs > -rw-r----- 1 mfs mfs 316238409 Aug 5 10:35 changelog_ml.49.mfs > -rw-r----- 1 mfs mfs 303947274 Aug 5 11:35 changelog_ml.48.mfs > -rw-r----- 1 mfs mfs 305556923 Aug 5 12:35 changelog_ml.47.mfs > -rw-r----- 1 mfs mfs 308845050 Aug 5 13:35 changelog_ml.46.mfs > -rw-r----- 1 mfs mfs 307716566 Aug 5 14:35 changelog_ml.45.mfs > -rw-r----- 1 mfs mfs 303948503 Aug 5 15:35 changelog_ml.44.mfs > -rw-r----- 1 mfs mfs 313333359 Aug 5 16:35 changelog_ml.43.mfs > -rw-r----- 1 mfs mfs 337023497 Aug 5 17:35 changelog_ml.42.mfs > -rw-r----- 1 mfs mfs 305881616 Aug 5 18:35 changelog_ml.41.mfs > -rw-r----- 1 mfs mfs 301584318 Aug 5 19:35 changelog_ml.40.mfs > -rw-r----- 1 mfs mfs 303163502 Aug 5 20:35 changelog_ml.39.mfs > -rw-r----- 1 mfs mfs 297444355 Aug 5 21:35 changelog_ml.38.mfs > -rw-r----- 1 mfs mfs 297679855 Aug 5 22:35 changelog_ml.37.mfs > -rw-r----- 1 mfs mfs 295181131 Aug 5 23:35 changelog_ml.36.mfs > -rw-r----- 1 mfs mfs 300224180 Aug 6 00:35 changelog_ml.35.mfs > -rw-r----- 1 mfs mfs 299161812 Aug 6 01:35 changelog_ml.34.mfs > -rw-r----- 1 mfs mfs 296216601 Aug 6 02:35 changelog_ml.33.mfs > -rw-r----- 1 mfs mfs 296392724 Aug 6 03:35 changelog_ml.32.mfs > -rw-r----- 1 mfs mfs 297765821 Aug 6 04:35 changelog_ml.31.mfs > -rw-r----- 1 mfs mfs 295794408 Aug 6 05:35 changelog_ml.30.mfs > -rw-r----- 1 mfs mfs 296871066 Aug 6 06:35 changelog_ml.29.mfs > -rw-r----- 1 mfs mfs 299617403 Aug 6 07:35 changelog_ml.28.mfs > -rw-r----- 1 mfs mfs 304978643 Aug 6 08:35 changelog_ml.27.mfs > -rw-r----- 1 mfs mfs 295654889 Aug 6 09:35 changelog_ml.26.mfs > -rw-r----- 1 mfs mfs 296849548 Aug 6 10:35 changelog_ml.25.mfs > -rw-r----- 1 mfs mfs 309943330 Aug 6 11:35 changelog_ml.24.mfs > -rw-r----- 1 mfs mfs 304064350 Aug 6 12:35 changelog_ml.23.mfs > -rw-r----- 1 mfs mfs 311306306 Aug 6 13:35 changelog_ml.22.mfs > -rw-r----- 1 mfs mfs 308487345 Aug 6 14:35 changelog_ml.21.mfs > -rw-r----- 1 mfs mfs 305949946 Aug 6 15:35 changelog_ml.20.mfs > -rw-r----- 1 mfs mfs 303363812 Aug 6 16:35 changelog_ml.19.mfs > -rw-r----- 1 mfs mfs 301929661 Aug 6 17:35 changelog_ml.18.mfs > -rw-r----- 1 mfs mfs 301694623 Aug 6 18:35 changelog_ml.17.mfs > -rw-r----- 1 mfs mfs 299423179 Aug 6 19:35 changelog_ml.16.mfs > -rw-r----- 1 mfs mfs 304727895 Aug 6 20:35 changelog_ml.15.mfs > -rw-r----- 1 mfs mfs 301643234 Aug 6 21:35 changelog_ml.14.mfs > -rw-r----- 1 mfs mfs 296423282 Aug 6 22:35 changelog_ml.13.mfs > -rw-r----- 1 mfs mfs 298760945 Aug 6 23:35 changelog_ml.12.mfs > -rw-r----- 1 mfs mfs 303663253 Aug 7 00:35 changelog_ml.11.mfs > -rw-r----- 1 mfs mfs 293011718 Aug 7 01:35 changelog_ml.10.mfs > -rw-r----- 1 mfs mfs 1001828352 Aug 7 02:11 metadata_ml.mfs.back > -rw-r----- 1 mfs mfs 160492393 Aug 7 02:11 
changelog_ml_back.0.mfs > -rw-r----- 1 mfs mfs 290318501 Aug 7 02:11 changelog_ml_back.1.mfs > -rw-r----- 1 mfs mfs 299823254 Aug 7 02:35 changelog_ml.9.mfs > -rw-r----- 1 mfs mfs 298815218 Aug 7 03:35 changelog_ml.8.mfs > -rw-r----- 1 mfs mfs 289845042 Aug 7 04:35 changelog_ml.7.mfs > -rw-r----- 1 mfs mfs 295657483 Aug 7 05:35 changelog_ml.6.mfs > -rw-r----- 1 mfs mfs 296467382 Aug 7 06:35 changelog_ml.5.mfs > -rw-r----- 1 mfs mfs 289313755 Aug 7 07:35 changelog_ml.4.mfs > -rw-r----- 1 mfs mfs 295862156 Aug 7 08:35 changelog_ml.3.mfs > -rw-r----- 1 mfs mfs 299891469 Aug 7 09:35 changelog_ml.2.mfs > -rw-r----- 1 mfs mfs 301908990 Aug 7 10:35 changelog_ml.1.mfs > -rw-r----- 1 mfs mfs 12355 Aug 7 11:07 sessions_ml.mfs > -rw-r----- 1 mfs mfs 80272457 Aug 7 11:07 changelog_ml.0.mfs > -rw-r----- 1 mfs mfs 0 Aug 7 17:23 .mfsmaster.lock > > Metalogger 2: > /mfs/metalogger# ls -altr > total 16255724 > -rw-r----- 1 moosefs moosefs 0 May 10 2012 .mfsmetalogger.lock > -rw-r----- 1 moosefs moosefs 296479738 Aug 5 10:16 changelog_ml.50.mfs > -rw-r----- 1 moosefs moosefs 316238409 Aug 5 11:16 changelog_ml.49.mfs > -rw-r----- 1 moosefs moosefs 303947274 Aug 5 12:16 changelog_ml.48.mfs > -rw-r----- 1 moosefs moosefs 305556923 Aug 5 13:16 changelog_ml.47.mfs > -rw-r----- 1 moosefs moosefs 308845050 Aug 5 14:16 changelog_ml.46.mfs > -rw-r----- 1 moosefs moosefs 307716566 Aug 5 15:16 changelog_ml.45.mfs > -rw-r----- 1 moosefs moosefs 303948503 Aug 5 16:16 changelog_ml.44.mfs > -rw-r----- 1 moosefs moosefs 313333359 Aug 5 17:16 changelog_ml.43.mfs > -rw-r----- 1 moosefs moosefs 337023497 Aug 5 18:16 changelog_ml.42.mfs > -rw-r----- 1 moosefs moosefs 305881616 Aug 5 19:16 changelog_ml.41.mfs > -rw-r----- 1 moosefs moosefs 301584318 Aug 5 20:16 changelog_ml.40.mfs > -rw-r----- 1 moosefs moosefs 303163502 Aug 5 21:16 changelog_ml.39.mfs > -rw-r----- 1 moosefs moosefs 297444355 Aug 5 22:16 changelog_ml.38.mfs > -rw-r----- 1 moosefs moosefs 297679855 Aug 5 23:16 changelog_ml.37.mfs > -rw-r----- 1 moosefs moosefs 295181131 Aug 6 00:16 changelog_ml.36.mfs > -rw-r----- 1 moosefs moosefs 300224180 Aug 6 01:16 changelog_ml.35.mfs > -rw-r----- 1 moosefs moosefs 299161812 Aug 6 02:16 changelog_ml.34.mfs > -rw-r----- 1 moosefs moosefs 296216601 Aug 6 03:16 changelog_ml.33.mfs > -rw-r----- 1 moosefs moosefs 296392724 Aug 6 04:16 changelog_ml.32.mfs > -rw-r----- 1 moosefs moosefs 297765821 Aug 6 05:16 changelog_ml.31.mfs > -rw-r----- 1 moosefs moosefs 295794408 Aug 6 06:16 changelog_ml.30.mfs > -rw-r----- 1 moosefs moosefs 296871066 Aug 6 07:16 changelog_ml.29.mfs > -rw-r----- 1 moosefs moosefs 299617403 Aug 6 08:16 changelog_ml.28.mfs > -rw-r----- 1 moosefs moosefs 304978643 Aug 6 09:16 changelog_ml.27.mfs > -rw-r----- 1 moosefs moosefs 295654889 Aug 6 10:16 changelog_ml.26.mfs > -rw-r----- 1 moosefs moosefs 296849548 Aug 6 11:16 changelog_ml.25.mfs > -rw-r----- 1 moosefs moosefs 309943330 Aug 6 12:16 changelog_ml.24.mfs > -rw-r----- 1 moosefs moosefs 304064350 Aug 6 13:16 changelog_ml.23.mfs > -rw-r----- 1 moosefs moosefs 311306306 Aug 6 14:16 changelog_ml.22.mfs > -rw-r----- 1 moosefs moosefs 308487345 Aug 6 15:16 changelog_ml.21.mfs > -rw-r----- 1 moosefs moosefs 305949946 Aug 6 16:16 changelog_ml.20.mfs > -rw-r----- 1 moosefs moosefs 303363812 Aug 6 17:16 changelog_ml.19.mfs > -rw-r----- 1 moosefs moosefs 301929661 Aug 6 18:16 changelog_ml.18.mfs > -rw-r----- 1 moosefs moosefs 301694623 Aug 6 19:16 changelog_ml.17.mfs > -rw-r----- 1 moosefs moosefs 299423179 Aug 6 20:16 changelog_ml.16.mfs > -rw-r----- 1 moosefs 
moosefs 304727895 Aug 6 21:16 changelog_ml.15.mfs > -rw-r----- 1 moosefs moosefs 301643234 Aug 6 22:16 changelog_ml.14.mfs > -rw-r----- 1 moosefs moosefs 296423282 Aug 6 23:16 changelog_ml.13.mfs > -rw-r----- 1 moosefs moosefs 298760945 Aug 7 00:16 changelog_ml.12.mfs > -rw-r----- 1 moosefs moosefs 303663253 Aug 7 01:16 changelog_ml.11.mfs > -rw-r----- 1 moosefs moosefs 905015296 Aug 7 02:10 metadata_ml.mfs.back > -rw-r----- 1 moosefs moosefs 258166057 Aug 7 02:11 changelog_ml_back.0.mfs > -rw-r----- 1 moosefs moosefs 301720434 Aug 7 02:11 changelog_ml_back.1.mfs > -rw-r----- 1 moosefs moosefs 293011718 Aug 7 02:16 changelog_ml.10.mfs > -rw-r----- 1 moosefs moosefs 299823254 Aug 7 03:16 changelog_ml.9.mfs > -rw-r----- 1 moosefs moosefs 298815218 Aug 7 04:16 changelog_ml.8.mfs > -rw-r----- 1 moosefs moosefs 289845042 Aug 7 05:16 changelog_ml.7.mfs > -rw-r----- 1 moosefs moosefs 295657483 Aug 7 06:16 changelog_ml.6.mfs > -rw-r----- 1 moosefs moosefs 296467382 Aug 7 07:16 changelog_ml.5.mfs > -rw-r----- 1 moosefs moosefs 289313755 Aug 7 08:16 changelog_ml.4.mfs > -rw-r----- 1 moosefs moosefs 295862156 Aug 7 09:16 changelog_ml.3.mfs > -rw-r----- 1 moosefs moosefs 299891469 Aug 7 10:16 changelog_ml.2.mfs > -rw-r----- 1 moosefs moosefs 301908990 Aug 7 11:16 changelog_ml.1.mfs > -rw-r----- 1 moosefs moosefs 12355 Aug 7 11:47 sessions_ml.mfs > drwxr-xr-x 2 moosefs moosefs 4096 Aug 7 11:47 . > -rw-r----- 1 moosefs moosefs 78761490 Aug 7 11:47 changelog_ml.0.mfs > > -----Original Message----- > From: ext Davies Liu [mailto:dav...@gm...] > Sent: Thursday, August 07, 2014 7:13 PM > To: Jakubiec, Jakub (NSN - PL/Wroclaw) > Subject: Re: [Moosefs-users] Metadata crash 60TiB > > Could you list all of files under /var/lib/mfs/ on master and > metalogger (with size and mtime)? > > Davies > > On Thu, Aug 7, 2014 at 9:35 AM, Jakubiec, Jakub (NSN - PL/Wroclaw) > <jak...@ns...> wrote: >> Hi, >> >> We are using MooseFS since 2010. Currently we have 60TiB mfs cluster based >> on 1.6.20 version. Today we have unexpected power failure. After that we are >> not able to restore metadata both mfsmaster and metaloggers. >> >> Below some details: >> >> On master: >> /etc/init.d/mfsmaster start >> Starting mfs-master: working directory: /var/lib/mfs/ >> lockfile created and locked >> initializing mfsmaster modules ... >> loading sessions ... ok >> sessions file has been loaded >> exports file has been loaded >> loading metadata ... >> loading objects (files,directories,etc.) ... ok >> loading names ... loading edge: read error: ENOENT (No such file or >> directory) >> error >> init: file system manager failed !!! >> error occured during initialization - exiting >> >> /opt/moose/sbin/mfsmetarestore -a >> can't find backed up metadata file !!! >> >> >> On metaloggers: >> /opt/moose/sbin/mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs >> changelog_ml.*.mfs >> loading objects (files,directories,etc.) ... ok >> loading names ... loading edge: read error: Unknown error >> error >> can't read metadata from file: metadata_ml.mfs.back >> >> >> Additionally some errors are repeated in syslog on master server since 17th >> July 2014: >> cat daemon.log daemon.log.1 | grep -i meta | tail -3 >> Aug 3 04:00:45 uberserver mfsmaster[26090]: can't write metadata >> Aug 3 05:00:20 uberserver mfsmaster[27178]: can't write metadata >> Aug 3 06:00:41 uberserver mfsmaster[28307]: can't write metadata >> >> Is there any chance to restore metadata and recover lost data? >> >> Please save me! >> >> Best regards. 
>> Jakub Jakubiec

> --
> - Davies

--
- Davies
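A minimal shell sketch of steps 1 and 2 of the recovery plan above. It only produces a textual id/version list and the raw name-carrying changelog lines; packing the {ID:64}{VERSION:32}{0:32} binary records that chunk_load() expects still needs a small converter, as Davies describes. Paths and the exact changelog line syntax are assumptions to be checked against your own installation:

# step 1: chunk id, version (both hex) and a zero locktime, one chunk per line
find /disk -type f -name 'chunk_*_*.mfs' \
  | sed -nE 's/.*chunk_([0-9A-Fa-f]{16})_([0-9A-Fa-f]{8})\.mfs$/\1 \2 00000000/p' \
  > chunks.txt
# step 2: keep only changelog entries that carry names/edges (run in /var/lib/mfs)
grep -hE '(CREATE|LINK|MOVE|SYMLINK)\(' changelog*.mfs > edges.txt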
From: Jakubiec, J. (N. - PL/Wroclaw) <jak...@ns...> - 2014-08-07 16:36:02
Hi,

We have been using MooseFS since 2010. Currently we have a 60TiB MFS cluster based on version 1.6.20. Today we had an unexpected power failure. After that, we are not able to restore the metadata on either the mfsmaster or the metaloggers.

Below are some details.

On master:

/etc/init.d/mfsmaster start
Starting mfs-master: working directory: /var/lib/mfs/
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... loading edge: read error: ENOENT (No such file or directory)
error
init: file system manager failed !!!
error occured during initialization - exiting

/opt/moose/sbin/mfsmetarestore -a
can't find backed up metadata file !!!

On metaloggers:

/opt/moose/sbin/mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs
loading objects (files,directories,etc.) ... ok
loading names ... loading edge: read error: Unknown error
error
can't read metadata from file: metadata_ml.mfs.back

Additionally, some errors have been repeating in syslog on the master server since 17th July 2014:

cat daemon.log daemon.log.1 | grep -i meta | tail -3
Aug 3 04:00:45 uberserver mfsmaster[26090]: can't write metadata
Aug 3 05:00:20 uberserver mfsmaster[27178]: can't write metadata
Aug 3 06:00:41 uberserver mfsmaster[28307]: can't write metadata

Is there any chance to restore the metadata and recover the lost data?

Please save me!

Best regards,
Jakub Jakubiec
From: WK <wk...@bn...> - 2014-08-05 18:39:24
We did not have good luck with VMs directly on MooseFS.

For the most part it worked very well if the MooseFS cluster was not under stress, but if a chunkserver was lost while you were replicating and you then started getting hit with a lot of deletion activity, the VM images could go read-only. Increasing the timeouts helped, but sometimes an image became so unstable as to become corrupt. Changing IO schedulers seemed to help as well, though I forget which one was most effective.

The newer versions of MooseFS have more controls to really slow down replication/delete activity, which would help a lot, but by then we were discouraged, and those controls wouldn't really help when one of the images itself got write-intensive.

So instead we put the VMs on fixed disk (RAID1) and mounted the MooseFS partitions from within the VM. That works well, since the data is composed of lots of smaller files and all the OS activity such as buffering occurs on the fixed-disk subsystem.

-wk

On 8/4/14, 5:01 PM, Warren Myers wrote:
> Is it considered advisable to run vmdks off a MooseFS installation in
> a live virtualization environment as a kind of distributed SAN?
>
> Warren Myers
> http://antipaucity.com
> https://www.digitalocean.com/?refcode=d197a961987a
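For reference, since the post above does not say which scheduler worked best: on Linux the I/O scheduler is set per block device through sysfs, so it can be inspected and switched at runtime. A minimal sketch, where sda is a placeholder for the chunkserver's data disk and the best choice is workload-dependent:

cat /sys/block/sda/queue/scheduler               # current choice shown in brackets, e.g. noop [deadline] cfq
echo deadline > /sys/block/sda/queue/scheduler   # switch; takes effect immediately, not persistent across reboots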
From: Warren M. <wa...@an...> - 2014-08-05 00:13:50
Is it considered advisable to run vmdks off a MooseFS installation in a live virtualization environment as a kind of distributed SAN?

Warren Myers
http://antipaucity.com
https://www.digitalocean.com/?refcode=d197a961987a
From: cheliequan <lie...@i-...> - 2014-08-02 04:17:34
Hi all,

I have built a LizardFS binary distribution for CentOS 7. Anyone who wants to use it can download it from the link below. The Thrift and Polonaise programs are also built into RPM packages. It works fine under my lab conditions.

Link: http://pan.baidu.com/s/1eQAFhYi
Password: zy1k

Best regards!
车烈权
Add: 5 Floor, No.4, Silicon Valley Bright City, No.1 Nongda South Road, Haidian District, Beijing, P.R. China
Zip Code: 100084
Tel: 13436881158
Fax: +86 010 8248 4849
Mail: lie...@i-...
Website: http://www.i-soft.com.cn
From: Joseph L. <jo...@ge...> - 2014-08-01 21:33:48
Hi,

I've been trying to get a feel for MooseFS's performance, especially after learning that MooseFS is available in the FreeBSD ports tree. It seems, however, that the MooseFS fuse-client is not stable on FreeBSD 10.0.

I installed the chunk server on 3 nodes, a master on a 4th. I then installed the fuse-client on a separate FreeBSD system, and did some tests with 'dd' (e.g., dd if=/dev/zero of=test.file). Performance was so-so, but dd isn't always a really great experiment.

Eventually, I decided I needed something designed to test disk IO performance, and ran "iozone -a". It ran for a little bit, then threw an unexpected error at me:

> Can not open temp file: iozone.tmp
> open: Device not configured

OR (in the case of a previous attempt):

> Error writing block 0, fd= 3
> write: Input/output error
> iozone: interrupted
> exiting iozone

In both cases, the mounted filesystem was no longer usable. Even running "ls" from the mount would give this error:

> ls: .: Device not configured

Then I tried the MooseFS fuse client on mac. It not only ran successfully, but for the tests that I could compare, it gave output figures substantially higher (some of which suggest a caching, compression, or throughput calculation issue, as they exceeded the gigabit network link).

Are there known issues with MooseFS on FreeBSD 10, specifically the fuse-client?

If anyone has suggestions or experience, I'd love to hear it.

Thanks,
-Joe
From: Adam K. D. <ad...@dm...> - 2014-07-21 16:48:14
Hi,

This is fantastic news and I look forward to trying it out. Good news about the shadow-master implementation especially!

Regards,
Adam Dean

----- Original Message -----
From: "Marcin Konarski" <mar...@li...>
To: liz...@li..., moo...@li...
Sent: Monday, 21 July, 2014 4:03:59 PM
Subject: [Moosefs-users] Lizardfs 2.5.0 Release announcement
From: Marcin K. <mar...@li...> - 2014-07-21 15:29:20
Dear users,

We are pleased to announce the release of LizardFS 2.5.0. LizardFS is a fork of MooseFS 1.6.27 (compatible up to version 1.6.27-5). LizardFS is released under the GPL license and its sources are available for download from GitHub: https://github.com/lizardfs/lizardfs

LizardFS is continuously under active development; the next release is planned within the next two months and will contain further improvements to HA (Pacemaker integration).

LizardFS supports both the latest Ubuntu LTS and Debian stable releases through repositories of binary packages: http://lizardfs.com/download/ Binary distributions for other OSes are coming soon.

Users running MooseFS versions up to 1.6.27-5 can easily upgrade their systems to LizardFS 2.5.0 (version 2.0 of MooseFS does not have a straight upgrade path).

This release adds several major features often requested by our users:
- high availability of the whole filesystem
- disk space and inode quotas for users and groups
- POSIX Access Control Lists
- POSIX Extended Attributes
- I/O bandwidth limiting (globally and per-mountpoint)
- command line monitoring tool

Other improvements include:
- a high performance CRC checksum implementation
- reduced overhead of hourly metadata backups

High availability

LizardFS 2.5.0 includes the new 'shadow master' server, which maintains a current copy of the filesystem metadata and is prepared to immediately replace the master metadata server in case of a failure. This feature, combined with replication of file data, allows LizardFS to remain fully operational after the crash of any particular machine in the cluster. The shadow master obtains metadata updates from the master server using the old MooseFS/LizardFS metalogger protocol. This facilitates no-downtime upgrades from older versions of LizardFS and from MooseFS 1.6.27.

Quotas

LizardFS 2.5.0 provides a quota mechanism similar to those found in common UNIX filesystems. The filesystem administrator can define limits on the number of files or the amount of disk space owned by users and groups. Quotas are controlled by the special tools 'mfssetquota' and 'mfsrepquota', which work analogously to the generic 'setquota' and 'repquota' tools.

POSIX Access Control Lists

ACLs allow fine-grained control over individual users' rights to read or modify particular files and directories. LizardFS 2.5.0 supports POSIX ACLs on Linux using the generic 'setfacl' and 'getfacl' tools. This feature requires our modified version of FUSE (kernel module and library).

POSIX Extended Attributes

LizardFS 2.5.0 supports POSIX xattrs using generic tools.

I/O bandwidth limiting

LizardFS 2.5.0 provides an I/O bandwidth limiting mechanism integrated with Linux cgroups. Applications reading from the filesystem are grouped by their cgroup ID, and the administrator can place limits on the bandwidth consumed to satisfy read and write requests from each group. The limits can be enforced globally (across multiple mountpoints) or locally (on one mountpoint).

Command line monitoring tool

LizardFS 2.5.0 introduces the new 'lizardfs-probe' command line tool, which queries the master server for various information concerning filesystem health and activity.

High performance CRC checksum implementation

In LizardFS 2.5.0 we switched to a new, three times faster CRC implementation to reduce the latency and overhead of checksum calculations.

Reduced overhead of hourly metadata backups

LizardFS 2.5.0 provides a new metadata backup generation mechanism which works fully in the background and doesn't affect the master server.

Regards,
Marcin Konarski
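Since the announcement says ACL support goes through the generic Linux tools, day-to-day usage should look like ordinary ACL management. A minimal sketch, where the user name and mountpoint are placeholders:

setfacl -m u:alice:rw /mnt/lizardfs/shared/report.txt   # grant one user read/write
getfacl /mnt/lizardfs/shared/report.txt                 # inspect the resulting ACL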
From: Michael T. <mic...@ho...> - 2014-07-05 02:05:22
What is the upgrade process? Is it still clients, then metaloggers, then master, then chunkservers?

---
mike t.
From: Davies L. <dav...@gm...> - 2014-07-02 21:15:39
When the master dumps its metadata to disk, it tries to fork() and do the dump in a child process. If there is not as much free memory available as mfsmaster is using, the fork() will fail and the dump happens in the current thread instead, blocking all operations and causing timeouts on connections.

Enabling overcommit will help in this case, see
http://sourceforge.net/p/moosefs/mailman/moosefs-users/thread/531...@pu.../

On Wed, Jul 2, 2014 at 3:58 AM, Ason Hu <tob...@gm...> wrote:
> Hi
>
> I found the master somehow disconnects chunkservers and resets client
> connections when master memory consumption reaches a particular %.
>
> At the first master failure it had 64GB of memory, with about 100 million
> files written to the MFS space:
>
> Files: about 100 million
> Memory usage from CGI server: 42GB
> Memory usage from system: 62GB
> Physical memory: 64GB
>
> After the master failed, I added an extra 64GB to it to proceed with
> testing. Oh yeah - the master revived. Until the file count reached
> 150 million, and the master failed again:
>
> Files: about 150 million
> Memory usage from CGI server: 61GB
> Memory usage from system: 110GB
> Physical memory: 128GB
>
> It seems that the master still has plenty of free RAM cached, but the
> master daemon cannot use it further.
>
> Does the master have a minimum free memory requirement to run correctly?
>
> Regards,
> Ason
>
> My MFS cluster:
>
> Master:
> OS: CentOS 6.5 x86_64
> FS: EXT4
> RAM: 64GB
>
> Chunk server (x3):
> OS: CentOS 6.5 x86_64
> FS: XFS
> HDD: 48TB
> RAM: 64GB

--
- Davies
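The overcommit switch referenced above is the standard Linux VM knob. A minimal sketch of checking and enabling it - mode 1 lets fork() succeed even when a child the size of mfsmaster could not be fully backed by free RAM (the child shares pages copy-on-write, so little extra memory is actually touched while the snapshot is written):

sysctl vm.overcommit_memory                          # 0 = heuristic, 1 = always allow, 2 = strict
sysctl -w vm.overcommit_memory=1                     # enable overcommit on the running system
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf  # persist across reboots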
From: Paul G. <go...@gm...> - 2014-07-02 13:22:28
Dear Krzysztof,

http://moosefs.com/products.html says that the CE version is still open source, but there are no links to download the source code and no source packages in the repository. Is it somehow possible to download the source code for the new version?

Thanks,
Paul.

On 07/01/2014 06:31 PM, Krzysztof Kielak wrote:
> Dear MooseFS Users,
>
> We are pleased to announce that the new version of MooseFS is publicly
> available starting today!
>
> The system will be distributed in two flavors:
> - Community Edition (CE), which is the continuation of the previous line
> of Open-Source products available at moosefs.org,
> - Professional Edition (Pro), targeting the Enterprise Storage market with
> features like the long-awaited High Availability for master servers, which
> puts the system in a new class of storage solutions.
>
> Binary packages from officially supported repositories for multiple
> platforms are now available through most commonly used package managers.
> For detailed instructions please go to http://get.moosefs.com and follow
> the instructions for your environment.
>
> Core Technology, the independent entity responsible for MooseFS
> development, is a former member of the Gemius Group (http://gemius.com),
> where MooseFS is the central component driving a variety of products in
> the Internet Monitoring/Research area, with 300 thousand events collected
> every second and stored on a 2.5 PB MooseFS instance.
>
> Our mission is to make this technology available to others, as we think it
> is extremely effective and easy to use, as it was in our case.
>
> The newest version has a lot of updates and new, exciting features. The
> MooseFS Pro license is sold together with technical support. The price
> depends on the size of your installation (number of physical disks, number
> of physical servers, size of raw data).
>
> In case of any questions don't hesitate to contact us at co...@mo...
>
> Best Regards,
> Krzysztof Kielak
> *Director of Operations and Customer Support*
> Mobile: +48 601 476 440
From: Ason Hu <tob...@gm...> - 2014-07-02 10:59:27
Hi,

I found the master somehow disconnects chunkservers and resets client connections when master memory consumption reaches a particular %.

At the first master failure it had 64GB of memory, with about 100 million files written to the MFS space:

Files: about 100 million
Memory usage from CGI server: 42GB
Memory usage from system: 62GB
Physical memory: 64GB

After the master failed, I added an extra 64GB to it to proceed with testing. Oh yeah - the master revived. Until the file count reached 150 million, and the master failed again:

Files: about 150 million
Memory usage from CGI server: 61GB
Memory usage from system: 110GB
Physical memory: 128GB

It seems that the master still has plenty of free RAM cached, but the master daemon cannot use it further.

Does the master have a minimum free memory requirement to run correctly?

Regards,
Ason

My MFS cluster:

Master:
OS: CentOS 6.5 x86_64
FS: EXT4
RAM: 64GB

Chunk server (x3):
OS: CentOS 6.5 x86_64
FS: XFS
HDD: 48TB
RAM: 64GB
From: Laurent W. <lw...@hy...> - 2014-07-02 09:39:30
On Tue, 1 Jul 2014 16:31:02 +0200, "Krzysztof Kielak" <krz...@co...> wrote:
> Dear MooseFS Users,

Hi Krzysztof,

Kudos for releasing 2.0. Two questions:
- What's the licence?
- Will sources be available?

Another thing: I "own" the #moosefs channel on freenode. If any of you (Core Tech) is willing to take it back, I'll happily give up the foundership of the channel.

Regards,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Alexander A. <akh...@ri...> - 2014-07-02 07:35:36
It's OK. Thank you, Krzysztof!

A port for FreeBSD 10 will be excellent - I will wait for it. Also, it would be interesting to read the list of changes/improvements/new features in the new 2.0 version. Will you release it?

Alexander
From: Krzysztof K. <krz...@co...> - 2014-07-02 06:41:19
Dear Alexander,

We are working on FreeBSD ports for version 10 and I hope we will be able to release them soon. At the moment we are able to deliver just binaries for FreeBSD starting from version 7 - if that helps, please let me know.

We'll try to cover this upgrade scenario in more detail. Generally speaking, MooseFS 2.0 upon startup will read and convert the metadata file from previous versions of the system. Since the new packages have a different naming convention, it is required to uninstall version 1.6 before the upgrade to 2.0. It is strongly recommended to back up the metadata before the upgrade.

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440

-----Original Message-----
From: Alexander Akhobadze [mailto:akh...@ri...]
Sent: Wednesday, July 2, 2014 8:25 AM
To: Krzysztof Kielak
Cc: 'Alexey Silk'; moo...@li...
Subject: Re: [Moosefs-users] MooseFS 2.0 released today!

Hi!

I wonder, will there be a FreeBSD port available for the 2.0 version? And I didn't see any explanation of the upgrade procedure from version 1 to version 2 :--(

wbr
Alexander
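A minimal sketch of the 1.6-to-2.0 path described above: back up the metadata, stop the old services, then swap the packages. The 1.6-era package name is illustrative (use whatever your installation is actually called), and the same idea applies with apt/yum instead of pkg:

cp -a /var/lib/mfs/metadata.mfs /var/lib/mfs/metadata.mfs.pre-2.0   # back up metadata first
mfsmaster stop                       # stop services before touching packages
pkg delete mfs-master                # remove the 1.6 package (name illustrative)
pkg install moosefs-ce-master        # install 2.0; it reads and converts old metadata on startup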
From: Alexander A. <akh...@ri...> - 2014-07-02 06:25:39
Hi!

I wonder, will there be a FreeBSD port available for the 2.0 version? And I didn't see any explanation of the upgrade procedure from version 1 to version 2 :--(

wbr
Alexander
From: Krzysztof K. <krz...@co...> - 2014-07-02 05:54:02
Alexy,

Currently the only difference between the CE and Pro versions is the High Availability option in Pro. You can see a more detailed explanation of the differences at http://moosefs.com/products.html

Future releases of the product will add more features to the Pro version. The CE version is based on the same code base.

Best Regards,
Krzysztof Kielak
Director of Operations and Customer Support
Mobile: +48 601 476 440

From: Alexey Silk [mailto:al...@si...]
Sent: Wednesday, July 2, 2014 6:16 AM
To: Krzysztof Kielak; moo...@li...
Subject: Re: [Moosefs-users] MooseFS 2.0 released today!

What are the differences between CE and Pro? Is there a change log?

From: Krzysztof Kielak
Sent: 01.07.2014 22:35
To: moo...@li...
Subject: [Moosefs-users] MooseFS 2.0 released today!
From: Alexey S. <al...@si...> - 2014-07-02 04:43:21
What are the differences between CE and Pro? Is there a change log?

-----Original message-----
From: "Krzysztof Kielak" <krz...@co...>
Sent: 01.07.2014 22:35
To: "moo...@li..." <moo...@li...>
Subject: [Moosefs-users] MooseFS 2.0 released today!