From: a. <ai...@qq...> - 2011-04-14 15:52:13
Dear guys,

We are planning to use MooseFS as our storage mechanism for more than 5 TB of off-line command logs, but we still have some concerns. Please find my queries below:

1. Is there any plan to resolve the single point of failure of the master?
2. Will data be lost if the master crashes before dumping metadata from RAM to disk?
3. If the situation in question #2 happens, how can I know which data were lost?
4. Does the master cache metadata read-only, or does it update the data in RAM and flush it to disk periodically?
5. Are the changelogs based on metadata.mfs.back or on the data in RAM?
6. Can I change the master's default metadata dump interval from once per hour to a user-defined value, such as 30 minutes?
7. How can we make sure no data is lost at any time with an HA (Keepalived) + DRBD + MFS solution?
8. The mfsmetalogger did not reconnect automatically after I stopped and restarted the mfsmaster; can it only be reconnected manually?
9. Which option in mfsmaster.cfg controls changelog rotation? Is it based on size, or does rotation happen each time mfsmaster dumps the metadata?

Any help will be appreciated. Looking forward to your answer.

Best regards,
Bob Lin
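For question 9, my current understanding (an assumption based on the MooseFS 1.6-era defaults, please correct me if wrong) is that the changelog is rotated each time the master dumps metadata rather than by size, and the only related mfsmaster.cfg option controls how many rotated files are kept:

    # mfsmaster.cfg excerpt (hypothetical values for illustration):
    # BACK_LOGS sets how many rotated changelog.*.mfs files the
    # master keeps after each metadata dump (default 50).
    BACK_LOGS = 50

Is that correct, or is there a separate option for the rotation trigger itself?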