From: yishi c. <hol...@gm...> - 2012-05-21 04:09:38
|
MFS can hold billions of files, and although each chunk is 64MB, *a chunk can contain more than one file*, until the total size of the files in a single chunk reaches 64MB. However, MFS keeps file information in a *hash table* with 2^22 buckets; if you store too many files, *the list in each hash bucket gets too long* (say, more than 300 entries), and that causes *some loss of MFS performance*. For small files, TFS would be a good choice, *but you need to do some coding work with TFS's client*.

2012/5/21 <moo...@li...>

> Message: 1
> Date: Mon, 21 May 2012 11:57:34 +0800
> From: Ken <ken...@gm...>
> Subject: Re: [Moosefs-users] Re: I want use MFS ,need help,thanks
> To: gaoyafang <ga...@co...>
> Cc: moo...@li...
>
> >> Will MFS support storing many small files in one chunk (for example 64MB) in
> >> the future?
> I do not think it should. (not officially)
>
> Did you read 'Solution of small file store' (
> http://sourceforge.net/mailarchive/message.php?msg_id=29244311)?
>
> Sincerely
> -Ken
>
> On Mon, May 21, 2012 at 10:55 AM, gaoyafang <ga...@co...> wrote:
>
> > I want to use MFS to store over a billion small email files; the average email
> > file size is 5K.
> >
> > A 64K block size seems too big, so I want to test a 4K block size. It is very
> > important for our project.
> >
> > Will MFS support storing many small files in one chunk (for example 64MB) in
> > the future?
> >
> > *From:* Michał Borychowski [mailto:mic...@co...]
> > *Sent:* April 17, 2012 14:02
> > *To:* 'gaoyf'
> > *Cc:* moo...@li...
> > *Subject:* RE: [Moosefs-users] I want use MFS ,need help,thanks
> >
> > Hi!
> >
> > 1. MySQL will run on MooseFS but probably not with the performance you
> > would expect. Or maybe you could build SSD-based chunkservers.
> >
> > 2. Chunk size as well as block size are hardcoded (
> > http://www.moosefs.org/moosefs-faq.html#modify_chunk) and really you
> > should not change its value in the code, as it's not as simple as just
> > changing one value.
> >
> > Probably you are worried about losing space on the storage with a bigger
> > block size. But have a look here:
> > http://www.moosefs.org/moosefs-faq.html#source_code - the space lost is not
> > that big: "a project with 100,000 small files would consume at most 13GiB of
> > extra space due to this internal chunk fragmentation."
> >
> > Kind regards
> > Michał Borychowski
> > MooseFS Support Manager
> >
> > *From:* gaoyf [mailto:ga...@co...]
> > *Sent:* Monday, April 16, 2012 10:45 AM
> > *To:* moo...@li...
> > *Subject:* [Moosefs-users] I want use MFS ,need help,thanks
> >
> > Hello
> >
> > I am testing MFS in a project. I have two questions:
> >
> > 1. Can I run a MySQL database on MFS?
> >
> > 2. Can I change the MFS block size from 64KB to 4KB? I want to use MFS to
> > store many small files; I found *MFSBLOCKSIZE* in the file "MFSCommunication.h" -
> > can I change it?
> >
> > ga...@co... |
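The load-factor arithmetic behind the "more than 300 entries per bucket" claim above is easy to check. The sketch below is plain C arithmetic under stated assumptions (a fixed 2^22-bucket table, roughly uniform hashing); it is not MooseFS code and does not reflect the master's actual hashing internals.

```c
/* Plain arithmetic behind the "chain length > 300" claim above: with a fixed
 * table of 2^22 buckets and roughly uniform hashing, the average chain length
 * grows linearly with the number of stored files. This is NOT MooseFS code. */
#include <stdio.h>

int main(void) {
    const double buckets = (double)(1u << 22);          /* 4,194,304 buckets */
    const double files[] = { 1e6, 1e8, 1e9, 1.26e9 };   /* sample file counts */

    for (int i = 0; i < 4; i++) {
        double avg_chain = files[i] / buckets;          /* expected entries per bucket */
        printf("%14.0f files -> average chain length %.1f\n", files[i], avg_chain);
    }
    /* Around 1.26 billion files the average chain already exceeds 300 entries,
     * which is the scale the message above warns about. */
    return 0;
}
```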
From: Michal B. <mic...@co...> - 2012-05-19 18:33:15
|
Hi!

Please remember that marking a disk for removal doesn't mean moving the chunks - they are copied to other disks. Please also note that you have 'all' and 'regular' views in the Info tab (http://screencast.com/t/Dr2O9tviOA).

Kind regards
Michał Borychowski

From: Karol Pasternak [mailto:kar...@ar...]
Sent: Thursday, May 17, 2012 2:21 PM
To: moo...@li...
Subject: [Moosefs-users] How to speedup replication from disk marked for removal

Hi,

I set one 500GB disk for removal 2 weeks ago. After this time, 74% of it is still used.

A week ago I set:
CHUNKS_WRITE_REP_LIMIT = 15
CHUNKS_READ_REP_LIMIT = 25

Why does it take so long?

br.
Karol |
From: Michal B. <mic...@co...> - 2012-05-19 08:39:00
|
Hi!

Users of MooseFS may now be interested in a new feature of ext4 called "bigalloc", introduced in the 3.2 kernel. According to http://lwn.net/Articles/469805/: "The "bigalloc" patch set adds the concept of "block clusters" to the filesystem; rather than allocate single blocks, a filesystem using clusters will allocate them in larger groups. Mapping between these larger blocks and the 4KB blocks seen by the core kernel is handled entirely within the filesystem."

Setting a 64KB cluster size may make sense, as MooseFS operates on 64KB blocks. We have not tested it, but we can expect it may give some performance boost. It would also depend on the average size of the files in your system.

And as MooseFS doesn't support deduplication by itself, you can also consider using the dedup functionality in ZFS.

Kind regards
Michał Borychowski |
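To make the cluster-size trade-off concrete, here is a small, purely illustrative C calculation of per-file slack for a few file sizes. The 5K average comes from the small-email-files thread earlier in this archive and is an assumption here, not a measurement; the calculation also ignores the extent/metadata savings that are the actual point of bigalloc.

```c
/* Back-of-the-envelope view of the trade-off discussed above: per-file slack
 * when the allocation unit is a 4KB block versus a 64KB bigalloc cluster.
 * Plain arithmetic only; it says nothing about ext4's metadata savings. */
#include <stdio.h>

static unsigned long slack(unsigned long file_size, unsigned long alloc_unit) {
    unsigned long units = (file_size + alloc_unit - 1) / alloc_unit; /* round up */
    return units * alloc_unit - file_size;                           /* wasted bytes */
}

int main(void) {
    unsigned long sizes[] = { 5 * 1024, 60 * 1024, 1024 * 1024 };    /* e.g. 5K mail files */
    for (int i = 0; i < 3; i++) {
        printf("%7lu-byte file: slack %6lu B with 4KB blocks, %6lu B with 64KB clusters\n",
               sizes[i], slack(sizes[i], 4096), slack(sizes[i], 65536));
    }
    return 0;
}
```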
From: Deon C. <deo...@gm...> - 2012-05-19 01:09:06
|
On Sat, May 12, 2012 at 4:00 AM, Boris Epstein <bor...@gm...> wrote: > Now that we are on the topic - if I need to build an SMB/CIFS gateway to > MooseFs what in your opinion would be the best way to do that? > Hi Boris, I currently use moose as third tier backup storage. I run my masters/metaloggers on SL6.2. None of my clients connect directly via mfsmount. On the moose master I have samba re-exporting a mfsmount. All my windows servers connect via samba and squirt their incrementals into the moose. Even for the odd linux server which I need to create tarballs and send them off I still connect via cifs due to not having to install and configure another piece of software. For my use this is acceptable. I also have a metalogger server using NFS to re-export a different mfsmount directory. All my ESXi (which I am replacing with KVM) boxes connect to this whenever I need to archive any old vm's which serve no purpose anymore but management doesn't want to delete it. My environment is kinda weird though, we are a MS SPLA partner providing hosting for MS products, all running on free as in beer infrastructure underneath lol. |
From: wkmail <wk...@bn...> - 2012-05-18 17:46:03
|
On 5/17/2012 8:05 PM, Wenhua Zhang wrote:
> Hi,
> I think you can get the error code definitions from the file
> mfscommon/MFSCommunication.h, such as:
> #define ERROR_WRONGVERSION 19 // Wrong chunk version
> #define ERROR_DISCONNECTED 28 // Disconnected
>
> I hope this will do some help for you.

Thank you, very helpful. Though I'm not sure what "Wrong chunk version" implies and whether it is a serious issue. Do I assume that the Disconnected error occurred because of the wrong chunk version?

-bill |
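A tiny lookup helper makes those numeric status codes easier to read when they show up in logs. The sketch below is hypothetical: it hard-codes only the two codes quoted above, and a real version would pull the full ERROR_* list from mfscommon/MFSCommunication.h.

```c
/* Hypothetical helper for reading chunkserver/master log lines: map numeric
 * status codes back to names. Only the two codes quoted above are filled in;
 * the complete list lives in mfscommon/MFSCommunication.h. */
#include <stdio.h>

static const char *mfs_strerror(int code) {
    switch (code) {
    case 19: return "ERROR_WRONGVERSION (wrong chunk version)";
    case 28: return "ERROR_DISCONNECTED (disconnected)";
    default: return "unknown - look it up in MFSCommunication.h";
    }
}

int main(void) {
    int seen[] = { 19, 28, 5 };   /* sample codes pulled from a log */
    for (int i = 0; i < 3; i++)
        printf("status %d: %s\n", seen[i], mfs_strerror(seen[i]));
    return 0;
}
```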
From: Allen, B. S <bs...@la...> - 2012-05-18 17:03:42
|
It doesn't look like you protect against the chunk being updated via MFS (or anything else) between the time you do shutil.copy2 and the time you do the symlink. I see that you check the mtime on the chunk, and check that it's been at least 60 seconds since the last modification. In my opinion this rebalance technique is dangerous without some way of guaranteeing the chunk can't be updated.

Hacky ways of ensuring there are no changes to the chunk until after it is copied:

1. Change the permissions of the live chunk so it is not writable by MFS. Copy, move the original, symlink back, fix the permissions of the copy, remove the original.

2. Maybe use chflags or similar to make the chunk file immutable during the copy.

I have no idea what MFS will do if it tries to write to a non-writable chunk file.

Also, instead of symlinks you could use bind mounting, which would eliminate the need for the patch, although this would make a ton of mounts.

Ideally this functionality should be built into mfschunkserver, since it obviously can stop itself from writing to a chunk that's being rebalanced.

Ben |
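As a rough illustration of the first "hacky" guard Ben describes (freeze the chunk's permissions, copy it, swap a symlink into the original path, then restore the mode on the copy), here is a minimal C sketch. It is not part of mfsrebalance.py or MooseFS; it assumes the chunkserver runs as a non-root user so the chmod really blocks its writes, does not verify the copy before discarding the original, and reduces error handling to bailing out.

```c
/* Minimal sketch of guard (1) above: make the live chunk read-only, copy it,
 * atomically swap a symlink into the original path, then restore the mode on
 * the copy. Hypothetical tool, NOT part of MooseFS or mfsrebalance.py. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void check(const char *what, int rc) {
    if (rc != 0) { perror(what); exit(1); }
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <chunk-file> <destination-file>\n", argv[0]);
        return 1;
    }
    const char *chunk = argv[1], *dest = argv[2];

    struct stat st;
    check("stat", stat(chunk, &st));
    /* 1. Freeze the chunk: drop all write bits so the chunkserver can't modify it
     *    (only effective if the chunkserver is not running as root). */
    check("chmod read-only", chmod(chunk, st.st_mode & ~(S_IWUSR | S_IWGRP | S_IWOTH)));

    /* 2. Copy while frozen (cp -p keeps times/mode; a real tool would copy in-process). */
    char cmd[8192];
    snprintf(cmd, sizeof(cmd), "cp -p '%s' '%s'", chunk, dest);
    if (system(cmd) != 0) { fprintf(stderr, "copy failed\n"); return 1; }

    /* 3. Atomically replace the original path with a symlink to the copy. */
    char tmplink[4096];
    snprintf(tmplink, sizeof(tmplink), "%s.mvlnk", chunk);
    check("symlink", symlink(dest, tmplink));
    check("swap symlink in", rename(tmplink, chunk));   /* unlinks the original data */

    /* 4. Make the copy writable again so the chunkserver can keep using it. */
    check("restore mode", chmod(dest, st.st_mode));
    return 0;
}
```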
From: Davies L. <dav...@gm...> - 2012-05-18 16:24:13
|
On Fri, May 18, 2012 at 10:35 PM, Boris Epstein <bor...@gm...> wrote: > > > On Fri, May 18, 2012 at 10:31 AM, Davies Liu <dav...@gm...> wrote: >> >> On Fri, May 18, 2012 at 9:36 PM, Boris Epstein <bor...@gm...> >> wrote: >> > Davies, >> > >> > How would you do that? Do you just copy the whole chunk storage >> > directory? >> >> Yes,it do not matter which disk the chunk data was stored. >> >> If one disk has less free space, we could even move part of them to >> another. >> >> After a patch for chunk server, make a symlink for moved chunks, we could >> do this disk usage adjustment online, without shutting down chunk server. >> mfsrebalance.py, as attached, can do this automatically. >> > > But I think the task is to take the entire chunk server off line - then do > you just move the content? Then how does the system know where to look for > it? No,you do not need to shut down chunk server, after moving a chunk file, mfsrebalance.py will create a symlink for original path. If you just copy chunk data, you do not need shut down chunk server too. After restarting, the chunk server will now the new path for moved chunks. > Boris. > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- - Davies |
From: Allen, B. S <bs...@la...> - 2012-05-18 15:30:37
|
My chunkservers are on top of ZFS pools on Solaris. Using gzip-1 I get 2.32x, which is along the lines of the compress ratio I get with similar systems serving NFS. Note, my data is inherently well compressible. With 2x Intel X5675, load is never an issue. As you up the level of gzip you'll see a diminishing return, and pretty heavy hits on CPU load. I'd also suggest using ZFS for raid if you care about single stream performance. Serve up one or two big zpools per chunkserver to MFS. Keep in mind the size of your pool however, as having MFS fail off that HD will can take ages. Also of course you'll loose capacity in this approach to parity of RAIDZ or RAIDZ2, and then again to MFS' goal > 1 if you want high availability. If you're thinking of using ZFS, I'd highly suggest using one of the Illumos based OSes instead of FreeBSD or Linux variants. The Linux port is still pretty young in my opinion. I'd suggest Illumian, http://illumian.org/ which grew out of Nexenta Core. By the way MFS is the only distributed FS that I know of that compiles and runs well on Solaris. I've found small file performance isn't all that great in this setup. Sub what NFS can do on a similar ZFS pool, so I wouldn't get your hopes up much for it to solve this issue. You could perhaps throw a good amount of small SSD drives at ZFS' ZIL to improve synchronous write speeds, but when using ZIL you're funneling all your synchronous writes through the ZIL devices. So while using two SSDs will likely give you a touch better latency, it will kill your throughput compared to a full chassis of drives. I've also tested use of L2Arc on MLC SSDs for read cache. If its affordable for you, I'd suggest throwing RAM in the box for L1ARC instead. At least in my workload, I see very little L2Arc hits. Most hits (90%) comes from L1ARC in memory in my chunkservers that have 96GB. Next series of systems I buy will have 128G, and I'll cut my L2ARC SSDs to less than half them of my current systems( 2.4T -> 960G). I guessing I could actually remove L2ARC all together and not see a performance hit, but I haven't done enough benchmarking to prove that one way or another. Ben On May 17, 2012, at 2:22 PM, Steve Wilson wrote: On 05/17/2012 04:17 PM, Steve Wilson wrote: On 05/17/2012 04:05 PM, Atom Powers wrote: On 05/17/2012 12:44 PM, Steve Wilson wrote: On 05/17/2012 03:26 PM, Atom Powers wrote: * Compression, 1.16x in my environment I don't know if 1.16x would give me much improvement in performance. I typically see about 1.4x on my ZFS backup servers which made me think that this reduction in disk I/O could result in improved overall performance for MooseFS. Not for performance, for disk efficiency. Ostensibly those 64MiB chunks won't always use 64MiB with compression on, especially for smaller files. This is a good point and it might help where it's most needed: all those small configuration files, etc. that have a large impact on the user's perception of disk performance. Bad: * high RAM requirement Is the high RAM due to using raidz{2-3}? I was thinking of making each disk a separate ZFS volume and then letting MooseFS combine the disks into an MFS volume (i.e., no raidz). I realize that greater performance could be achieved by striping across disks in the chunk servers but I'm willing to trade off that performance gain for higher redundancy (in the case of using simple striping) and/or greater capacity (in the case of using raidz, raidz2, or raidz3). ZFS does a lot of caching in RAM. 
My chunk servers use hardware RAID, not raidz, and still use several hundred MiB of RAM. Personally, I would prefer to use raidz for muliple disks over MooseFS, because managing individual disks and disk failures should be much better. For example, to minimize the amount of re-balancing MooseFS needs to do; not to mention the possible performance benefit. But I can think of no reason why you couldn't do a combination of both. That is certainly worth considering. I hope to have enough time with the new chunk servers to try out different configurations before I have to put them into service. Steve ------------------------------------------------------------------------------ Live Security Virtual Conference Exclusive live event will cover all the ways today's security and threat landscape has changed and how IT managers can respond. Discussions will include endpoint security, mobile security and the latest in malware threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ _______________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Boris E. <bor...@gm...> - 2012-05-18 14:35:08
|
On Fri, May 18, 2012 at 10:31 AM, Davies Liu <dav...@gm...> wrote: > On Fri, May 18, 2012 at 9:36 PM, Boris Epstein <bor...@gm...> > wrote: > > Davies, > > > > How would you do that? Do you just copy the whole chunk storage > directory? > > Yes,it do not matter which disk the chunk data was stored. > > If one disk has less free space, we could even move part of them to > another. > > After a patch for chunk server, make a symlink for moved chunks, we could > do this disk usage adjustment online, without shutting down chunk server. > mfsrebalance.py, as attached, can do this automatically. > > But I think the task is to take the entire chunk server off line - then do you just move the content? Then how does the system know where to look for it? Boris. |
From: Davies L. <dav...@gm...> - 2012-05-18 14:31:49
|
On Fri, May 18, 2012 at 9:36 PM, Boris Epstein <bor...@gm...> wrote: > Davies, > > How would you do that? Do you just copy the whole chunk storage directory? Yes,it do not matter which disk the chunk data was stored. If one disk has less free space, we could even move part of them to another. After a patch for chunk server, make a symlink for moved chunks, we could do this disk usage adjustment online, without shutting down chunk server. mfsrebalance.py, as attached, can do this automatically. > Boris. > > > On Fri, May 18, 2012 at 9:04 AM, Davies Liu <dav...@gm...> wrote: >> >> You can use rsync to copy chunk files from old one to new one, then stop >> the >> chunk server, rsync again, add new disc in mfshdd.cfg, remove old one, >> then start chunk server. >> >> On Thu, May 17, 2012 at 11:18 PM, Karol Pasternak >> <kar...@ar...> wrote: >> > Weird, in first order it tries to replicate undergoal files to new disk >> > instead of moving chunks from one disk to another. >> > >> > Any ideas? Is it possible to give higher priority to move chunks from >> > disk >> > marked for removal to new one in the same chunkserver? >> > >> > Of course I know that I can change goal to lower but maybe it's other >> > possibility.. . >> > >> > >> > On 05/17/2012 03:44 PM, Karol Pasternak wrote: >> > >> > Hi Steve, >> > >> > thanks for replay. You're right, all disk on this machines was full. >> > I thought, that when I remove this disk, this will replicate to others >> > chunkservers. >> > I added one more disk and it seems that it replicates correctly. >> > >> > Thanks for hint! >> > >> > On 05/17/2012 03:10 PM, Steve Wilson wrote: >> > >> > Hi Karol, >> > >> > I just replaced a 3TB disk that was about 85% full and it took about two >> > days for the chunks to be moved off it. How full are the other disks in >> > your chunk server? I assume there's enough space on them to accept the >> > chunks from the disk marked for removal. Did you restart the mfs-master >> > daemon after changing the mfsmaster.cfg variables? And did you restart >> > the >> > mfs-chunkserver daemon after marking the disk for removal? I'm sorry to >> > be >> > asking such obvious questions but sometimes it's easy to overlook the >> > obvious! >> > >> > Steve >> > >> > On 05/17/2012 08:20 AM, Karol Pasternak wrote: >> > >> > Hi, >> > >> > I've set one 500GB disk for removal 2 weeks ago. >> > After this time, it's still used 74% of it. >> > >> > Week ago I set: >> > CHUNKS_WRITE_REP_LIMIT = 15 >> > CHUNKS_READ_REP_LIMIT = 25 >> > >> > Why it takes so much long time? >> > >> > >> > br. >> > Karol >> > >> > >> > >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. >> > Discussions >> > will include endpoint security, mobile security and the latest in >> > malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > >> > >> > >> > _______________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > >> > >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. 
>> > Discussions >> > will include endpoint security, mobile security and the latest in >> > malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > >> > >> > >> > _______________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > >> > >> > >> > >> > >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. >> > Discussions >> > will include endpoint security, mobile security and the latest in >> > malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > >> > >> > >> > _______________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > >> > >> > >> > >> > >> > ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. >> > Discussions >> > will include endpoint security, mobile security and the latest in >> > malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > _______________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > >> >> >> >> -- >> - Davies >> >> >> ------------------------------------------------------------------------------ >> Live Security Virtual Conference >> Exclusive live event will cover all the ways today's security and >> threat landscape has changed and how IT managers can respond. Discussions >> will include endpoint security, mobile security and the latest in malware >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- - Davies |
From: yishi c. <hol...@gm...> - 2012-05-18 14:31:15
|
hi : the detail of this patch is like this: old: syslog(LOG_NOTICE,"(%s:%"PRIu16") chunk: %016"PRIX64" creation status: %"PRIu8,eptr->servstrip,eptr->servport,chunkid,status); new: log_got_chunk_checksum %= LOG_COUNT; if (log_got_chunk_checksum++ == 0) { syslog(LOG_NOTICE,"(%s:%"PRIu16") chunk: %016"PRIX64" calculate checksum status: %"PRIu8,eptr->servstrip,eptr->servport,chunkid,status); } as you see ,the sametype of syslog will be printed only 1/n ,the n can be configured in the mfsmaster.cfg. all the code is based on 1.6.17 2012/5/11 yishi cheng <hol...@gm...> > hi mfs-dev-team: > I've found a small bug of moosefs:when there is a > chunkserver with a lot of chunks(for instance,abount one million > chunks),connected to a master which has no information about these > chunks.the master will assign a new chunkid for each chunk,and call the > syslog() for each chunk at the same time.that is quite a big cost.for my > server(12 cores cpu),it can print about less than 150 syslog per second,for > more than one million chunks it will take about two hours to handle this > "new chunkserver". > During this two hours,the *master can't handle the > normal request of mfsmount*.if we have more than one MFS cluster,or the > chunkserver has been disconnected with the master for a certain time that > the master has already deleted all the information of these chunks.when the > chunkserver connected to a diffrent master or reconnect to the master after > a long time.*this bug will make the MFS system down with the normal > request for hours*. > the same kind of problem may occured under the > circumstance like manually delete a lot of chunks of chunkserver,and *something > similar with do some thing wrong with the chunks* .the master will print > all the information of these chunks with syslog(). > I fixed these bugs and added a configuration about > how many times we will reduce the log numbers,if will config this with > 1000,the master will print 1/1000 of the normal log. > the modified code is in the mail‘s attachment.inmy perspective ,this bug is critical for online service,hope you can let > users know about the risk of this. > > best wished and good luck. > > yours tony cheng > > 2012/5/11 <moo...@li...> > > Send moosefs-users mailing list submissions to >> moo...@li... >> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> or, via email, send a message with subject or body 'help' to >> moo...@li... >> >> You can reach the person managing the list at >> moo...@li... >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of moosefs-users digest..." >> >> >> Today's Topics: >> >> 1. Re: does MooseFS support incremental backup for data? (????) >> 2. spindles or chunkservers for performance? (wkmail) >> 3. bundle open source [was: Solution of small file store] (Ken) >> 4. Re: bundle open source [was: Solution of small file store] >> (Ken) >> 5. Re: bundle open source [was: Solution of small file store] >> (Dennis Jacobfeuerborn) >> 6. Re: fox and bird (Steve Thompson) >> 7. Moosefs git source tree not updated (Anh K. Huynh) >> 8. Re: fox and bird (Dr. Michael J. Chudobiak) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Wed, 9 May 2012 09:45:25 +0800 >> From: ???? <toa...@gm...> >> Subject: Re: [Moosefs-users] does MooseFS support incremental backup >> for data? >> To: ??? <shu...@gm...> >> Cc: moo...@li... 
>> Message-ID: >> <CADX6XXVF5vUEjiHKtCHPMoBF9K8j-X8XFbbJ5QX2AE7gr= >> qD...@ma...> >> Content-Type: text/plain; charset="gb2312" >> >> use moosefs's delay delete can fit what you want. >> >> 2012/4/24 ??? <shu...@gm...> >> >> > hi, >> > MooseFS is efficent file system for distributing. From Q&A on >> > www.moosefs.org, I know moosefs support snapshot feature. >> > I want to know whether if moose suppport incremental backup for data >> > in chunkdata server? If not support, will it be added int the future? >> > >> > 3x >> > >> > Good lucky >> > >> > >> > >> ------------------------------------------------------------------------------ >> > Live Security Virtual Conference >> > Exclusive live event will cover all the ways today's security and >> > threat landscape has changed and how IT managers can respond. >> Discussions >> > will include endpoint security, mobile security and the latest in >> malware >> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> > _______________________________________________ >> > moosefs-users mailing list >> > moo...@li... >> > https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > >> > >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> >> ------------------------------ >> >> Message: 2 >> Date: Wed, 09 May 2012 12:11:58 -0700 >> From: wkmail <wk...@bn...> >> Subject: [Moosefs-users] spindles or chunkservers for performance? >> To: moo...@li... >> Message-ID: <4FA...@bn...> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >> >> We run several MFS clusters, mostly for data storage but we have been >> pleased with their use in Email Server clusters where despite the >> Storage penalty (the 64K chunks multiplying the storage size used) >> performance has been quite good and compares to other solutions we have >> tried (replicated NFS etc) with much easier maintenance. Our feeling is >> that hard drives are still cheap (despite the Asian flood) and we have >> lots of older kit/drives floating around in the DC. >> >> We currently have a 4 chunkserver setup that due to growth is beginning >> to slow down (7+ million files now). >> >> Each CS has a single SATA 1TB drive. Goal is set to 3. >> >> Would we be better off adding additional chunkservers and thus spreading >> the read/writes out over more CS machines? >> >> or would simply adding additional drives to the existing Chunkservers >> achieve the same thing (or close to the same thing) due to the writes >> being spread over more spindles. >> >> On this list I recall previous recommendations that going more than 4 >> spindles per CS was problematic due to limits in the software for the >> number of CS connections, but >> in this case we are starting with only 1 drive apiece now and we >> certainly have a lot of opportunity to grow (and with the lower end kit >> we use for chunkservers they probably only >> have 2-4 SATA ports anyway). >> >> Thank You. >> >> -bill >> >> >> >> >> >> ------------------------------ >> >> Message: 3 >> Date: Thu, 10 May 2012 19:17:05 +0800 >> From: Ken <ken...@gm...> >> Subject: [Moosefs-users] bundle open source [was: Solution of small >> file store] >> To: moosefs-users <moo...@li...> >> Message-ID: >> < >> CAJ...@ma...> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> hi, all >> >> As mention in previous mail >> (http://sf.net/mailarchive/message.php?msg_id=29171206), >> now we open source it - bundle >> >> https://github.com/xiaonei/bundle >> >> The source is well tested and documented. 
>> >> Demo: >> http://60.29.242.206/demo.html >> >> >> Any ideas is appreciated. >> >> -Ken >> >> >> >> ------------------------------ >> >> Message: 4 >> Date: Thu, 10 May 2012 19:51:37 +0800 >> From: Ken <ken...@gm...> >> Subject: Re: [Moosefs-users] bundle open source [was: Solution of >> small file store] >> To: moosefs-users <moo...@li...> >> Message-ID: >> <CAJbVbdLDiHZLzwf07L= >> v_W...@ma...> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> A fast Demo: >> http://220.181.180.55/demo.html >> >> >> -Ken >> >> >> On Thu, May 10, 2012 at 7:17 PM, Ken <ken...@gm...> wrote: >> > hi, all >> > >> > As mention in previous mail >> > (http://sf.net/mailarchive/message.php?msg_id=29171206), >> > now we open source it - bundle >> > >> > ?https://github.com/xiaonei/bundle >> > >> > The source is well tested and documented. >> > >> > Demo: >> > ?http://60.29.242.206/demo.html >> > >> > >> > Any ideas is appreciated. >> > >> > -Ken >> >> >> >> ------------------------------ >> >> Message: 5 >> Date: Thu, 10 May 2012 15:39:56 +0200 >> From: Dennis Jacobfeuerborn <den...@co...> >> Subject: Re: [Moosefs-users] bundle open source [was: Solution of >> small file store] >> To: moo...@li... >> Message-ID: <4FA...@co...> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> On 05/10/2012 01:17 PM, Ken wrote: >> > hi, all >> > >> > As mention in previous mail >> > (http://sf.net/mailarchive/message.php?msg_id=29171206), >> > now we open source it - bundle >> > >> > https://github.com/xiaonei/bundle >> > >> > The source is well tested and documented. >> > >> > Demo: >> > http://60.29.242.206/demo.html >> > >> >> Looks really interesting, thanks! >> You should probably add a license file to make it clear what the >> conditions >> for its use are. >> >> Regards, >> Dennis >> >> >> >> ------------------------------ >> >> Message: 6 >> Date: Thu, 10 May 2012 17:28:50 -0400 (EDT) >> From: Steve Thompson <sm...@cb...> >> Subject: Re: [Moosefs-users] fox and bird >> To: moo...@li... >> Message-ID: >> <alp...@as...> >> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed >> >> On Thu, 29 Mar 2012, Steve Thompson wrote: >> >> > I have further found that Thunderbird works well, but Firefox is so >> > painfully slow (glacial) as to be unusable. For the time being, I have >> had >> > to relocate the .mozilla directories to a non-MFS file system and >> replaced >> > them by symbolic links. >> >> Now I have upgraded to 1.6.25, have emptied MFS completely apart from my >> .mozilla directory. There are now four dedicated chunkservers with a total >> of 20TB of SATA RAID-5 file systems formatted with ext4, and all four are >> connected to a common HP Procurve switch using dual bonded balance-alb >> gigabit links, dedicated to MFS, with MTU=1500. The master is running on >> one of the chunkservers. "hdparm -t" gives me about 400 MB/sec. >> >> Firefox is still so painfully slow as to be unuseable. It takes something >> like 30-45 minutes to start firefox, and several minutes to click on a >> link. With .mozilla in an NFS mounted file system from the same disks, >> firefox starts immediately, so it doesn't look like hardware. >> >> Copying a large file into MFS gets me something like 80-85 MB/sec >> (physically twice that with goal=2) so I am at a loss to explain the >> dismal performance with firefox. I could really use some ideas, as I have >> no idea where to go next. >> >> Steve >> >> >> >> ------------------------------ >> >> Message: 7 >> Date: Fri, 11 May 2012 13:50:31 +0700 >> From: "Anh K. 
Huynh" <anh...@gm...> >> Subject: [Moosefs-users] Moosefs git source tree not updated >> To: moo...@li... >> Message-ID: <201...@gm...> >> Content-Type: text/plain; charset=US-ASCII >> >> Hello, >> >> Though MFS has released some new versions, I don't see any updates in >> the git repository >> >> >> http://moosefs.git.sourceforge.net/git/gitweb.cgi?p=moosefs/moosefs;a=summary >> >> Is the git repository located at another place and/or is there anything >> wrong here? >> >> Thanks & Regards, >> >> -- >> Anh K. Huynh >> ('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#" >> `CB]V?Tx<uVtT`Rpo3NlF.Jh++FdbCBA@?]!~|4XzyTT43Qsqq(Lnmkj"Fhg${z@> >> >> >> >> ------------------------------ >> >> Message: 8 >> Date: Fri, 11 May 2012 05:36:14 -0400 >> From: "Dr. Michael J. Chudobiak" <mj...@av...> >> Subject: Re: [Moosefs-users] fox and bird >> To: moo...@li... >> Message-ID: <4FA...@av...> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >> >> On 05/10/2012 05:28 PM, Steve Thompson wrote: >> > On Thu, 29 Mar 2012, Steve Thompson wrote: >> > >> >> I have further found that Thunderbird works well, but Firefox is so >> >> painfully slow (glacial) as to be unusable. For the time being, I have >> had >> >> to relocate the .mozilla directories to a non-MFS file system and >> replaced >> >> them by symbolic links. >> ... >> > Copying a large file into MFS gets me something like 80-85 MB/sec >> > (physically twice that with goal=2) so I am at a loss to explain the >> > dismal performance with firefox. I could really use some ideas, as I >> have >> > no idea where to go next. >> >> I would focus on the sqlite files that firefox uses. sqlite is notorious >> for causing problems on remote filesystems (particularly NFS). >> "urlclassifier3.sqlite" in particular grows to be very large (~64 MB). >> >> Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps >> call fsync much more frequently than others. >> >> - Mike >> >> >> >> ------------------------------ >> >> >> ------------------------------------------------------------------------------ >> Live Security Virtual Conference >> Exclusive live event will cover all the ways today's security and >> threat landscape has changed and how IT managers can respond. Discussions >> will include endpoint security, mobile security and the latest in malware >> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ >> >> ------------------------------ >> >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> >> >> End of moosefs-users Digest, Vol 29, Issue 7 >> ******************************************** >> > > |
From: Boris E. <bor...@gm...> - 2012-05-18 13:36:48
|
Davies, How would you do that? Do you just copy the whole chunk storage directory? Boris. On Fri, May 18, 2012 at 9:04 AM, Davies Liu <dav...@gm...> wrote: > You can use rsync to copy chunk files from old one to new one, then stop > the > chunk server, rsync again, add new disc in mfshdd.cfg, remove old one, > then start chunk server. > > On Thu, May 17, 2012 at 11:18 PM, Karol Pasternak > <kar...@ar...> wrote: > > Weird, in first order it tries to replicate undergoal files to new disk > > instead of moving chunks from one disk to another. > > > > Any ideas? Is it possible to give higher priority to move chunks from > disk > > marked for removal to new one in the same chunkserver? > > > > Of course I know that I can change goal to lower but maybe it's other > > possibility.. . > > > > > > On 05/17/2012 03:44 PM, Karol Pasternak wrote: > > > > Hi Steve, > > > > thanks for replay. You're right, all disk on this machines was full. > > I thought, that when I remove this disk, this will replicate to others > > chunkservers. > > I added one more disk and it seems that it replicates correctly. > > > > Thanks for hint! > > > > On 05/17/2012 03:10 PM, Steve Wilson wrote: > > > > Hi Karol, > > > > I just replaced a 3TB disk that was about 85% full and it took about two > > days for the chunks to be moved off it. How full are the other disks in > > your chunk server? I assume there's enough space on them to accept the > > chunks from the disk marked for removal. Did you restart the mfs-master > > daemon after changing the mfsmaster.cfg variables? And did you restart > the > > mfs-chunkserver daemon after marking the disk for removal? I'm sorry to > be > > asking such obvious questions but sometimes it's easy to overlook the > > obvious! > > > > Steve > > > > On 05/17/2012 08:20 AM, Karol Pasternak wrote: > > > > Hi, > > > > I've set one 500GB disk for removal 2 weeks ago. > > After this time, it's still used 74% of it. > > > > Week ago I set: > > CHUNKS_WRITE_REP_LIMIT = 15 > > CHUNKS_READ_REP_LIMIT = 25 > > > > Why it takes so much long time? > > > > > > br. > > Karol > > > > > > > > > > > > > ------------------------------------------------------------------------------ > > Live Security Virtual Conference > > Exclusive live event will cover all the ways today's security and > > threat landscape has changed and how IT managers can respond. Discussions > > will include endpoint security, mobile security and the latest in malware > > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > > > > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > > > > > > ------------------------------------------------------------------------------ > > Live Security Virtual Conference > > Exclusive live event will cover all the ways today's security and > > threat landscape has changed and how IT managers can respond. Discussions > > will include endpoint security, mobile security and the latest in malware > > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > > > > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... 
> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > > > > > > > > > > > > ------------------------------------------------------------------------------ > > Live Security Virtual Conference > > Exclusive live event will cover all the ways today's security and > > threat landscape has changed and how IT managers can respond. Discussions > > will include endpoint security, mobile security and the latest in malware > > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > > > > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > > > > > > ------------------------------------------------------------------------------ > > Live Security Virtual Conference > > Exclusive live event will cover all the ways today's security and > > threat landscape has changed and how IT managers can respond. Discussions > > will include endpoint security, mobile security and the latest in malware > > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > _______________________________________________ > > moosefs-users mailing list > > moo...@li... > > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > -- > - Davies > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Davies L. <dav...@gm...> - 2012-05-18 13:04:43
|
You can use rsync to copy chunk files from old one to new one, then stop the chunk server, rsync again, add new disc in mfshdd.cfg, remove old one, then start chunk server. On Thu, May 17, 2012 at 11:18 PM, Karol Pasternak <kar...@ar...> wrote: > Weird, in first order it tries to replicate undergoal files to new disk > instead of moving chunks from one disk to another. > > Any ideas? Is it possible to give higher priority to move chunks from disk > marked for removal to new one in the same chunkserver? > > Of course I know that I can change goal to lower but maybe it's other > possibility.. . > > > On 05/17/2012 03:44 PM, Karol Pasternak wrote: > > Hi Steve, > > thanks for replay. You're right, all disk on this machines was full. > I thought, that when I remove this disk, this will replicate to others > chunkservers. > I added one more disk and it seems that it replicates correctly. > > Thanks for hint! > > On 05/17/2012 03:10 PM, Steve Wilson wrote: > > Hi Karol, > > I just replaced a 3TB disk that was about 85% full and it took about two > days for the chunks to be moved off it. How full are the other disks in > your chunk server? I assume there's enough space on them to accept the > chunks from the disk marked for removal. Did you restart the mfs-master > daemon after changing the mfsmaster.cfg variables? And did you restart the > mfs-chunkserver daemon after marking the disk for removal? I'm sorry to be > asking such obvious questions but sometimes it's easy to overlook the > obvious! > > Steve > > On 05/17/2012 08:20 AM, Karol Pasternak wrote: > > Hi, > > I've set one 500GB disk for removal 2 weeks ago. > After this time, it's still used 74% of it. > > Week ago I set: > CHUNKS_WRITE_REP_LIMIT = 15 > CHUNKS_READ_REP_LIMIT = 25 > > Why it takes so much long time? > > > br. > Karol > > > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > > ------------------------------------------------------------------------------ > Live Security Virtual Conference > Exclusive live event will cover all the ways today's security and > threat landscape has changed and how IT managers can respond. Discussions > will include endpoint security, mobile security and the latest in malware > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- - Davies |
From: Raymond J. <ray...@ca...> - 2012-05-18 01:12:27
|
On 5/17/2012 12:26 PM, Atom Powers wrote: > I use ZFS on FreeBSD, which is one of the main reasons I use FreeBSD on > my chunk servers. We also use FreeBSD here, due to the fact that ZFS works quite well at our scale (10-20TB zpools, 8 disks/zpool, one mfschunkserver invocation per zpool). Our current setup is to have cheap commodity nodes that boot FreeBSD disklessly. We can easily provision more storage (it's about 10 commands and 30 minutes; we don't expand fast enough to automate everything). Admittedly, we have a small cluster (60TB), but we haven't run into scaling issues so far. > > Good: > * Compression, 1.16x in my environment > * zraid > * probably improved performance (I haven't done a comparison on MooseFS > but saw better performance over UFS for "standard" file system use) > * Easy to carve up for other uses on the same server ZFS makes disk management a pleasure, compared to UFS. With labeled GPT partitions and FreeBSD's built in GPT name support, it makes administration a breeze. Unfortunately most of our computing applications require a homogenous Linux computing environment, so our storage nodes don't see other use. > Bad: > * high RAM requirement RAM is cheap, and without deduplication/compression ZFS only takes as much RAM as you tell it to (all of it by default, but it normally lies unused on our storage nodes anyway). > Ugly: > * FreeBSD is tricky to build with bootable ZFS > * Linux ZFS is FUSE. We tried the Linux LLNL port when it came out back at the beginning of last year, but the number of metadata lookups vs. data lookups was killing ZFS performance and was causing instability. This was on a smaller cluster of about 30TB, so we didn't scale up. Combined with the better documentation and administrative features of FreeBSD, ZFS was a no-brainer for us. Raymond Jimenez |
From: Quenten G. <QG...@on...> - 2012-05-17 22:45:07
|
Hey Atom,

Linux ZFS isn't all FUSE: yes, there is a ZFS FUSE implementation, but there is also a
native kernel-module implementation from zfsonlinux.org.

Enjoy!

Regards,
Quenten

-----Original Message-----
From: Atom Powers [mailto:ap...@di...]
Sent: Friday, 18 May 2012 5:27 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Best FileSystem for chunkers.

I use ZFS on FreeBSD, which is one of the main reasons I use FreeBSD on
my chunk servers.
[...]
Ugly:
* FreeBSD is tricky to build with bootable ZFS
* Linux ZFS is FUSE. |
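A quick way to tell which of the two Linux implementations is actually in use on a given
box (standard Linux tooling; output will vary by distribution):

    # Native zfsonlinux port: the zfs/spl kernel modules are loaded
    lsmod | grep -E '^(zfs|spl)'
    modinfo zfs | head -n 3

    # FUSE port: a userspace zfs-fuse daemon is running instead
    ps -C zfs-fuse -o pid,cmd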
From: Elliot F. <efi...@gm...> - 2012-05-17 22:43:13
|
We tried using FreeBSD/ZFS with one disk per zpool, just for simplicity's sake. ZFS is
unable to correct bit errors in this configuration, so if you get a bit error, the pool
goes offline. MFS doesn't like that at all. :)

We ended up using good old UFS2.

On Thu, May 17, 2012 at 1:44 PM, Steve Wilson <st...@pu...> wrote:
> I was thinking of making each disk a separate ZFS volume and then letting
> MooseFS combine the disks into an MFS volume (i.e., no raidz).
> [...] |
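One possible mitigation, not tried in this thread and offered only as a hedged sketch: ZFS
can keep extra copies of data blocks even in a single-disk pool via the copies property,
which lets it self-heal some checksum errors at the cost of the corresponding extra space.
Pool and device names below are hypothetical:

    # Hypothetical single-disk pool backing one chunkserver directory
    zpool create chunk01 /dev/ada1
    # Store two copies of every data block so a checksum error can be repaired
    # from the second copy (roughly halves usable capacity for new writes)
    zfs set copies=2 chunk01
    zfs set compression=on chunk01

Whether the doubled space cost is worth it versus simply relying on MooseFS goals for
redundancy, as Elliot ended up doing with UFS2, is a judgment call.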
From: Steve W. <st...@pu...> - 2012-05-17 20:22:21
|
On 05/17/2012 04:17 PM, Steve Wilson wrote:
> On 05/17/2012 04:05 PM, Atom Powers wrote:
>> On 05/17/2012 12:44 PM, Steve Wilson wrote:
>>> On 05/17/2012 03:26 PM, Atom Powers wrote:
>>>> * Compression, 1.16x in my environment
>>> I don't know if 1.16x would give me much improvement in performance.
>>> I typically see about 1.4x on my ZFS backup servers which made me
>>> think that this reduction in disk I/O could result in improved
>>> overall performance for MooseFS.
>> Not for performance, for disk efficiency. Ostensibly those 64MiB chunks
>> won't always use 64MiB with compression on, especially for smaller
>> files.
>
> This is a good point and it might help where it's most needed: all
> those small configuration files, etc. that have a large impact on the
> user's perception of disk performance.
>
>>>> Bad: * high RAM requirement
>>> Is the high RAM due to using raidz{2-3}? I was thinking of making
>>> each disk a separate ZFS volume and then letting MooseFS combine the
>>> disks into an MFS volume (i.e., no raidz). I realize that greater
>>> performance could be achieved by striping across disks in the chunk
>>> servers but I'm willing to trade off that performance gain for
>>> higher redundancy (compared to simple striping) and/or greater
>>> capacity (compared to raidz, raidz2, or raidz3).
>> ZFS does a lot of caching in RAM. My chunk servers use hardware RAID,
>> not raidz, and still use several hundred MiB of RAM.
>>
>> Personally, I would prefer to use raidz for multiple disks over MooseFS,
>> because managing individual disks and disk failures should be much
>> better; for example, it minimizes the amount of re-balancing MooseFS
>> needs to do, not to mention the possible performance benefit. But I can
>> think of no reason why you couldn't do a combination of both.
>
> That is certainly worth considering. I hope to have enough time with
> the new chunk servers to try out different configurations before I
> have to put them into service.
>
> Steve |
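Since the thread keeps coming back to whether compression would actually pay off, a hedged
way to estimate it is to copy a representative sample of real chunk files onto a compressed
ZFS dataset and read the ratio back (pool, dataset, and path names are hypothetical):

    # Throwaway compressed dataset for the test
    zfs create -o compression=on tank/mfs_test
    # Copy a sample of existing chunk files in
    rsync -a /mnt/existing_chunks/ /tank/mfs_test/
    # A ratio noticeably above 1.00x means compression saves space on this workload
    zfs get compressratio tank/mfs_test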
From: Atom P. <ap...@di...> - 2012-05-17 20:05:55
|
On 05/17/2012 12:44 PM, Steve Wilson wrote:
> On 05/17/2012 03:26 PM, Atom Powers wrote:
>> * Compression, 1.16x in my environment
> I don't know if 1.16x would give me much improvement in performance.
> I typically see about 1.4x on my ZFS backup servers which made me
> think that this reduction in disk I/O could result in improved
> overall performance for MooseFS.

Not for performance, for disk efficiency. Ostensibly those 64MiB chunks won't always use
64MiB with compression on, especially for smaller files. Most of my files in the Moose
cluster are quite large; I get 1.86x compression on the mail store, which I'm considering
moving to MooseFS.

>> Bad: * high RAM requirement
> Is the high RAM due to using raidz{2-3}? I was thinking of making
> each disk a separate ZFS volume and then letting MooseFS combine the
> disks into an MFS volume (i.e., no raidz). I realize that greater
> performance could be achieved by striping across disks in the chunk
> servers but I'm willing to trade off that performance gain for
> higher redundancy (compared to simple striping) and/or greater
> capacity (compared to raidz, raidz2, or raidz3).

ZFS does a lot of caching in RAM. My chunk servers use hardware RAID, not raidz, and still
use several hundred MiB of RAM.

Personally, I would prefer to use raidz for multiple disks over MooseFS, because managing
individual disks and disk failures should be much better; for example, it minimizes the
amount of re-balancing MooseFS needs to do, not to mention the possible performance
benefit. But I can think of no reason why you couldn't do a combination of both.

--
-- Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443 |
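On the RAM point: the ZFS ARC grabs most of free memory by default, but it can be capped so
a chunkserver keeps headroom for mfschunkserver itself. A hedged sketch for both platforms
discussed here; the 4 GiB figure is arbitrary and should be sized to the machine:

    # FreeBSD: add to /boot/loader.conf and reboot
    vfs.zfs.arc_max="4G"

    # Linux (zfsonlinux): add to /etc/modprobe.d/zfs.conf and reload the module
    options zfs zfs_arc_max=4294967296   # value in bytes (4 GiB)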
From: Steve W. <st...@pu...> - 2012-05-17 19:44:52
|
On 05/17/2012 03:26 PM, Atom Powers wrote:
> On 05/17/2012 11:56 AM, Steve Wilson wrote:
>> I'd like to know if anyone has tried a file system with compression like
>> ZFS on Linux. Some have mentioned that it might yield a performance
>> improvement. I might give it a try on our next pair of chunk servers
>> but it would be good to know if anyone else has gone this route before
>> and what their experience has been.
> I use ZFS on FreeBSD, which is one of the main reasons I use FreeBSD on
> my chunk servers.
>
> Good:
> * Compression, 1.16x in my environment

I don't know if 1.16x would give me much improvement in performance. I typically see about
1.4x on my ZFS backup servers which made me think that this reduction in disk I/O could
result in improved overall performance for MooseFS.

> * zraid
> * probably improved performance (I haven't done a comparison on MooseFS
>   but saw better performance over UFS for "standard" file system use)
> * Easy to carve up for other uses on the same server
>
> Bad:
> * high RAM requirement

Is the high RAM due to using raidz{2-3}? I was thinking of making each disk a separate ZFS
volume and then letting MooseFS combine the disks into an MFS volume (i.e., no raidz). I
realize that greater performance could be achieved by striping across disks in the chunk
servers, but I'm willing to trade off that performance gain for higher redundancy
(compared to simple striping) and/or greater capacity (compared to raidz, raidz2, or
raidz3).

> Ugly:
> * FreeBSD is tricky to build with bootable ZFS
> * Linux ZFS is FUSE.

If someone is using Linux, I would definitely recommend the native ZFS on Linux port
(http://zfsonlinux.org/) rather than the FUSE version of ZFS. I've been using it for my
backup servers for about six months now and with very good success.

Steve |
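A minimal sketch of the per-disk layout Steve describes, assuming a Linux chunkserver on
the native zfsonlinux port with three data disks sdb-sdd; device names, pool names, and
mount points are all hypothetical:

    # One single-disk pool per physical disk, compression on, mounted where
    # MooseFS will look for chunks
    for d in b c d; do
        zpool create -O compression=on -m /mfs/chunks_sd$d chunk_sd$d /dev/sd$d
    done

    # mfshdd.cfg then lists each mount point, one per line; redundancy is left
    # to MooseFS goals across chunkservers rather than to raidz
    #   /mfs/chunks_sdb
    #   /mfs/chunks_sdc
    #   /mfs/chunks_sdd

With this layout a failed disk takes out only one pool, and MooseFS re-replicates the
affected chunks, which is the trade-off against raidz that the thread is weighing.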
From: Atom P. <ap...@di...> - 2012-05-17 19:26:49
|
On 05/17/2012 11:56 AM, Steve Wilson wrote:
> I'd like to know if anyone has tried a file system with compression like
> ZFS on Linux. Some have mentioned that it might yield a performance
> improvement. I might give it a try on our next pair of chunk servers
> but it would be good to know if anyone else has gone this route before
> and what their experience has been.

I use ZFS on FreeBSD, which is one of the main reasons I use FreeBSD on my chunk servers.

Good:
* Compression, 1.16x in my environment
* zraid
* probably improved performance (I haven't done a comparison on MooseFS
  but saw better performance over UFS for "standard" file system use)
* Easy to carve up for other uses on the same server

Bad:
* high RAM requirement

Ugly:
* FreeBSD is tricky to build with bootable ZFS
* Linux ZFS is FUSE.

--
-- Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443 |
From: Steve W. <st...@pu...> - 2012-05-17 18:56:12
|
On 05/16/2012 05:34 PM, wkmail wrote:
> We are building another MFS cluster.
>
> What is the current thought regarding file systems for chunkers, using
> older retired Opteron 250 servers (SATA2), each with 2GB RAM?
>
> We would use Scientific Linux 6.x, which defaults to ext4, so that is the
> first thing to come to mind, but reviews are mixed.
>
> Some really push ext3:
>
> http://source.yeeyan.org/view/353552_f0d/Linux%20filesystems%20benchmarked:%20EXT3%20vs%20EXT4%20vs%20XFS%20vs%20BTRFS
>
> and since the chunkserver environment is decidedly single purpose, we are
> leaning in that direction.
>
> Note: we aren't terribly excited about XFS as we have had some issues
> with it in the past, but if someone feels strongly about it...
>
> Sincerely,
>
> -wk

We have used both ext4 and XFS with equal success. ext4 has more overhead than XFS, which
on our chunk servers (8 x 3TB disks) causes the ext4 chunk servers to report 21TB of space
while the XFS chunk servers report 22TB of space. It's hard to give up a terabyte of space!

I'd like to know if anyone has tried a file system with compression like ZFS on Linux.
Some have mentioned that it might yield a performance improvement. I might give it a try
on our next pair of chunk servers but it would be good to know if anyone else has gone
this route before and what their experience has been.

Steve |
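For reference, a hedged sketch of preparing one chunkserver disk each way; the mount
options shown are common choices for chunk storage rather than MooseFS requirements, and
the device and mount-point names are hypothetical:

    # XFS (lower metadata overhead in Steve's measurements)
    mkfs.xfs /dev/sdb
    mount -o noatime /dev/sdb /mfs/chunks_sdb

    # ext4, with the reserved-blocks percentage dropped from the 5% default,
    # since no root-owned services need emergency space on a dedicated chunk disk
    mkfs.ext4 -m 0 /dev/sdc
    mount -o noatime /dev/sdc /mfs/chunks_sdc

    # Both mount points then go into mfshdd.cfg, one per line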
From: Karol P. <kar...@ar...> - 2012-05-17 15:18:56
|
Weird; it first tries to replicate undergoal files to the new disk instead of moving
chunks from one disk to another.

Any ideas? Is it possible to give higher priority to moving chunks from a disk marked for
removal to a new one in the same chunkserver?

Of course I know that I can lower the goal, but maybe there is another possibility.

On 05/17/2012 03:44 PM, Karol Pasternak wrote:
> Hi Steve,
>
> thanks for the reply. You're right, all disks on this machine were full.
> I thought that when I removed this disk, its chunks would replicate to
> other chunkservers.
> I added one more disk and it seems that it replicates correctly.
>
> Thanks for the hint!
>
> On 05/17/2012 03:10 PM, Steve Wilson wrote:
>> Hi Karol,
>>
>> I just replaced a 3TB disk that was about 85% full and it took about
>> two days for the chunks to be moved off it. How full are the other
>> disks in your chunk server? I assume there's enough space on them to
>> accept the chunks from the disk marked for removal. Did you restart
>> the mfs-master daemon after changing the mfsmaster.cfg variables? And
>> did you restart the mfs-chunkserver daemon after marking the disk for
>> removal? I'm sorry to be asking such obvious questions, but sometimes
>> it's easy to overlook the obvious!
>>
>> Steve
>>
>> On 05/17/2012 08:20 AM, Karol Pasternak wrote:
>>> Hi,
>>>
>>> I set one 500GB disk for removal two weeks ago.
>>> Two weeks later, it is still 74% used.
>>>
>>> A week ago I set:
>>> CHUNKS_WRITE_REP_LIMIT = 15
>>> CHUNKS_READ_REP_LIMIT = 25
>>>
>>> Why does it take so long?
>>>
>>> br.
>>> Karol |
From: Karol P. <kar...@ar...> - 2012-05-17 13:44:39
|
Hi Steve,

thanks for the reply. You're right, all disks on this machine were full.
I thought that when I removed this disk, its chunks would replicate to other chunkservers.
I added one more disk and it seems that it replicates correctly.

Thanks for the hint!

On 05/17/2012 03:10 PM, Steve Wilson wrote:
> Hi Karol,
>
> I just replaced a 3TB disk that was about 85% full and it took about
> two days for the chunks to be moved off it. How full are the other
> disks in your chunk server? I assume there's enough space on them to
> accept the chunks from the disk marked for removal. Did you restart
> the mfs-master daemon after changing the mfsmaster.cfg variables? And
> did you restart the mfs-chunkserver daemon after marking the disk for
> removal? I'm sorry to be asking such obvious questions, but sometimes
> it's easy to overlook the obvious!
>
> Steve
>
> On 05/17/2012 08:20 AM, Karol Pasternak wrote:
>> Hi,
>>
>> I set one 500GB disk for removal two weeks ago.
>> Two weeks later, it is still 74% used.
>>
>> A week ago I set:
>> CHUNKS_WRITE_REP_LIMIT = 15
>> CHUNKS_READ_REP_LIMIT = 25
>>
>> Why does it take so long?
>>
>> br.
>> Karol

--
Pozdrawiam/Best Regards

Karol Pasternak
Linux Administrator | ARBOmedia
ka...@ar...
T +48 22 592 49 95 | M +48 785 555 184

ARBOmedia Polska | Altowa 2 | 02-386 Warszawa
www.arbomedia.pl | T +48 22 592 45 00 | F 22 448 71 63

ARBOmedia Polska Sp. z o.o., registered with the District Court for the Capital City of
Warsaw, 12th Commercial Division of the National Court Register, KRS no. 0000181362,
share capital PLN 600,000, NIP: 527-23-12-331 |
From: Steve W. <st...@pu...> - 2012-05-17 13:11:00
|
Hi Karol,

I just replaced a 3TB disk that was about 85% full and it took about two days for the
chunks to be moved off it. How full are the other disks in your chunk server? I assume
there's enough space on them to accept the chunks from the disk marked for removal. Did
you restart the mfs-master daemon after changing the mfsmaster.cfg variables? And did you
restart the mfs-chunkserver daemon after marking the disk for removal? I'm sorry to be
asking such obvious questions, but sometimes it's easy to overlook the obvious!

Steve

On 05/17/2012 08:20 AM, Karol Pasternak wrote:
> Hi,
>
> I set one 500GB disk for removal two weeks ago.
> Two weeks later, it is still 74% used.
>
> A week ago I set:
> CHUNKS_WRITE_REP_LIMIT = 15
> CHUNKS_READ_REP_LIMIT = 25
>
> Why does it take so long?
>
> br.
> Karol |
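For anyone following the thread later, a hedged sketch of the mark-for-removal workflow
being discussed; paths are hypothetical, and the asterisk prefix in mfshdd.cfg is the
MooseFS convention for marking a disk for removal:

    # mfshdd.cfg on the chunkserver: prefix the outgoing disk with '*'
    #   /mnt/chunks_sda
    #   */mnt/chunks_sdb     <- marked for removal
    mfschunkserver restart    # or 'reload', so the new flag is picked up

    # mfsmaster.cfg on the master: allow more replications per loop,
    # then restart (or reload) the master so the limits take effect
    #   CHUNKS_WRITE_REP_LIMIT = 15
    #   CHUNKS_READ_REP_LIMIT = 25
    mfsmaster restart

    # Watch the CGI monitor (mfscgiserv) until the marked disk shows no chunks
    # left, then drop its line from mfshdd.cfg and pull the disk.

As Steve's questions imply, forgetting either restart is a common reason the marked disk
never seems to drain.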
From: Karol P. <kar...@ar...> - 2012-05-17 12:20:53
|
Hi,

I set one 500GB disk for removal two weeks ago.
Two weeks later, it is still 74% used.

A week ago I set:
CHUNKS_WRITE_REP_LIMIT = 15
CHUNKS_READ_REP_LIMIT = 25

Why does it take so long?

br.
Karol |