From: wkmail <wk...@bn...> - 2012-05-03 20:32:51
That is supposed to be fixed in 1.6.25: you can now set the deletion limits in the chunkserver config file. In 1.6.20 your only option is to modify the mfsmaster source code. I wrote a patch for that earlier in the year when I ran into the same issue, but I think it's best that you just upgrade to 1.6.25.

That being said, we still use a slow_del.pl program around here that does an unlink once every second whenever we have to kill big folders. You just background it and let it go for a few days, and everybody is much happier.

BTW, if you think big deletes on MooseFS are problematic (and they no longer are), you should try an Active/Active DRBD setup. That just grinds to a halt, period.

-bill

On 5/3/2012 1:19 PM, Steve Thompson wrote:
> MooseFS 1.6.20, CentOS 5.7.
>
> File deletion in MFS is very fast, as we all know. A couple of days ago we
> had a mass deletion of just over 11 million files occupying about 5TB.
> After the elapse of the trash time of 24 hours, MFS set about removing all
> of the deleted chunks. And with a fervour: it devoted itself entirely to
> this task, whereupon the speed of normal file system operations dropped to
> much less than 1% of its former performance. It was unusable until the
> last of the 11 million was gone, whereupon performance returned to normal.
> Are there any tunables or best practices (I know, don't delete 11 million
> files at once) that can influence this?
>
> Steve
>
> _______________________________________________
> moosefs-users mailing list
> moo...@li...
> https://lists.sourceforge.net/lists/listinfo/moosefs-users
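The throttled-delete idea behind slow_del.pl can be sketched as follows. This is a hypothetical Python stand-in (bill's actual Perl script is not shown in the thread, and the name `slow_delete` is invented for illustration): walk the tree bottom-up, unlink one file per configurable interval so the filesystem never sees a burst of deletions, then remove the emptied directories.

```python
#!/usr/bin/env python
# Hypothetical sketch of the throttled-delete approach described above;
# bill's real slow_del.pl is not shown in the thread. Walks the tree
# bottom-up, unlinking one file per `delay` seconds so deletions trickle
# in instead of arriving as one huge burst, then removes the empty dirs.
import os
import sys
import time

def slow_delete(root, delay=1.0):
    """Delete everything under `root`, one unlink per `delay` seconds.

    Returns the number of files removed.
    """
    count = 0
    # topdown=False yields leaf directories first, so files are gone
    # before we try to rmdir their parent directories.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            os.unlink(os.path.join(dirpath, name))
            count += 1
            time.sleep(delay)
        for name in dirnames:
            os.rmdir(os.path.join(dirpath, name))
    os.rmdir(root)
    return count

if __name__ == "__main__":
    slow_delete(sys.argv[1])
```

As described above, you would background it (e.g. with `nohup ... &`) and let it run for days; at one unlink per second, 11 million files would of course take far longer, so the interval is a knob to tune against how much deletion load the cluster tolerates.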