From: Wolfgang <moo...@wo...> - 2015-12-04 12:42:39
Dear List!

I'm using 6 Banana Pis as chunkservers and a virtual Ubuntu machine as master. I do a daily backup of my data to the MooseFS cluster with the following script: http://pastebin.com/95QK0ywQ

It uses rsync, but to keep a history the script creates a folder like 2015-11-13__06-45-01 and checks for changes - if a file has not changed since the last run, it makes a hardlink to save disk space (so you still get the complete folder structure on every run). As the script runs daily, there are more and more folders - and more and more hardlinks, since 99% of the files don't change.

Normally saving the MooseFS metadata on the master - shown in the web GUI - takes 10-25 s. What I observe is that after about 3 weeks the metadata save takes longer and longer, until it takes several minutes; the master gets blocked, the load is very high, and the save does not finish within hours. Before I added swap space, the master process died with an out-of-memory error. Now, with 8 GB of HDD swap, the master no longer dies, but while it is swapping it is very, very slow. When I manage to delete some of the daily folders from the backup, everything works again and saving the metadata is back down to 10-25 s.

Some numbers:

  Master RAM: 4 GB
  metadata.mfs.back: currently about 925 MB
  Total space in MooseFS: 8.4 TB, available: 2.9 TB
  du -sh /data/moos/homedirs/                        ->  2.9 TB
  du -sh /data/moos/homedirs/2015-11-13__06-45-01/   ->  1.5 TB

So my wild guess is that there are too many hardlinks to manage - is anything known in that direction, or does anybody know what I can test further on this issue?

Thank you very much & have a nice weekend!

Greetings from Austria
Wolfgang