From: Thomas S H. <tha...@gm...> - 2011-02-28 23:45:44
|
I have run into a problem with MooseFS when I delete a LOT of files at once. In my environment we add hundreds of thousands of files, process them into smaller encoded files, and then delete the originals. When we delete the originals, the deletion process can lock up our writes and dramatically slow down our MooseFS. Is there a way to make large-scale deletions behave a little more nicely? -Thomas S Hatch |
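One stopgap while the deletions are this disruptive is to throttle them on the client side rather than issuing one huge rm. A minimal sketch, assuming the originals live under a mounted MooseFS path such as /mnt/mfs/originals (the path, batch size, and pause are all illustrative, not from this thread):

    # Delete in batches of 100 with a one-second pause between batches,
    # so the deletion queue is never flooded all at once.
    find /mnt/mfs/originals -type f -print0 \
        | xargs -0 -n 100 sh -c 'rm -f -- "$@"; sleep 1' _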
From: Michal B. <mic...@ge...> - 2011-03-01 11:43:14
|
Hi Thomas! Do you use ext3fs? Deleting files on that filesystem is very slow in its own right, and we assume the kernel also has some built-in mechanism that blocks other disk operations while deleting. These are things we cannot do anything about. On the MooseFS side there are indeed some deletion limits, which depend on the number of chunks that need to be deleted. We'll think about introducing an option to configure this upper limit. Regards -Michal |
From: Ricardo J. B. <ric...@da...> - 2011-03-01 16:39:40
|
On Tuesday, 01 March 2011, Michal Borychowski wrote:
> Do you use ext3fs? Deleting files on that filesystem is very slow in its
> own right [...] These are things we cannot do anything about.

What filesystem would you recommend: ext4, xfs? (Just curious, as I can't find a recommendation on the website.)

> On the MooseFS side there are indeed some deletion limits, which depend
> on the number of chunks that need to be deleted. We'll think about
> introducing an option to configure this upper limit.

Nice to know, as we also used to copy a directory with lots of small files, make a tarball, and delete the copy. As a workaround, instead of copying the files we now symlink the origin directory and 'tar czhf' it. Regards, -- Ricardo J. Barberis, Senior SysAdmin / ITI, Dattatec.com :: Soluciones de Web Hosting |
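For reference, the symlink workaround looks roughly like this (a sketch; the directory and archive names are illustrative):

    # Old approach: copy, tar, then mass-delete the copy (slow on MooseFS):
    #   cp -r /mnt/mfs/smallfiles /mnt/mfs/smallfiles.copy
    #   tar czf backup.tar.gz /mnt/mfs/smallfiles.copy
    #   rm -rf /mnt/mfs/smallfiles.copy
    # Workaround: symlink instead, and let tar's 'h' flag dereference the
    # link, so the only thing deleted afterwards is the link itself.
    ln -s /mnt/mfs/smallfiles smallfiles.snap
    tar czhf backup.tar.gz smallfiles.snap
    rm smallfiles.snap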
From: Michal B. <mic...@ge...> - 2011-03-02 10:51:00
|
On Tuesday, 01 March 2011, Ricardo J. Barberis wrote:
> What filesystem would you recommend: ext4, xfs? (Just curious, as I
> can't find a recommendation on the website.)

[MB] Unfortunately we do not have any recommendation for the filesystem on chunkservers. Generally speaking, every filesystem is good; the differences only show up in very big production environments. We use ext3 and are quite satisfied. We look forward to your observations :) Kind regards Michal |
From: Michal B. <mic...@ge...> - 2011-03-02 10:27:17
|
Hi! Thomas, you can try setting the CHUNKS_DEL_LIMIT option in mfsmaster.cfg to 10 and see if that helps for the moment. It depends on whether you delete lots of files at once or delete them "continuously". Regards Michal |
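In mfsmaster.cfg that is a single line (the value 10 is just Michal's starting suggestion above; the comment's per-loop interpretation is an assumption, and the master needs a reload/restart for the change to take effect):

    # mfsmaster.cfg -- cap how many chunk deletions the master schedules
    # in each pass; lower = gentler on writes, slower space reclamation
    CHUNKS_DEL_LIMIT = 10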
From: Thomas S H. <tha...@gm...> - 2011-03-02 16:06:45
|
Sure thing, I will give it a try. I have not yet spent much time looking at chunk loops, but it looks like I can refine a lot of the backend processes with them, so I will have to play around with it! |
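For anyone finding this thread later: the chunk-loop options this tuning revolves around are believed to be the following (option names assumed from the MooseFS 1.6 configuration; the values are illustrative, not defaults -- verify against your version's mfsmaster.cfg.dist):

    CHUNKS_LOOP_TIME = 300         # seconds between chunk maintenance passes
    CHUNKS_DEL_LIMIT = 10          # chunk deletions allowed per pass
    CHUNKS_WRITE_REP_LIMIT = 1     # replication writes allowed per pass
    CHUNKS_READ_REP_LIMIT = 5      # replication reads allowed per pass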