From: Thomas S H. <tha...@gm...> - 2011-04-21 17:15:02

For what it is worth, we also use XFS for all of our chunk servers.

2011/4/21 boblin <ai...@qq...>:
> Thank you, Davies. I've read the XFS Wikipedia entry, and it is really
> impressive. I will do my examinations and post the result later.
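The `ls -al ... | awk '{ a+=$5; }'` pipeline used in this thread to total up chunk sizes breaks on filenames containing spaces and also counts the size fields of directory entries. A small recursive walk is more robust; this is a minimal sketch (the `/data1/chunkdata` path in the usage comment is the one from the thread, adjust to your layout):

```python
import os

def dir_usage_bytes(root: str) -> int:
    """Sum of the sizes of all regular files under root (recursive),
    i.e. the byte count that the awk pipeline in the thread approximates."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip dangling symlinks, sockets, etc.
                total += os.path.getsize(path)
    return total

# Usage, e.g.:
#   print(f"{dir_usage_bytes('/data1/chunkdata') / 1024 / 1024:.1f} MB")
```

Note this reports apparent file sizes, like `du --apparent-size`; `df` additionally sees filesystem overhead (journal, inode tables, reserved blocks), which is the discrepancy discussed below.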
From: b. <ai...@qq...> - 2011-04-21 17:12:17

Thank you, Davies. I've read the XFS Wikipedia entry, and it is really impressive. I will do my examinations and post the result later.

------------------ Original message ------------------
From: "Davies Liu" <dav...@gm...>
Sent: Friday, April 22, 2011, 0:53 AM
To: "boblin" <ai...@qq...>
Cc: "youngcow" <you...@gm...>; "Michal Borychowski" <mic...@ge...>; "moosefs-users" <moo...@li...>
Subject: Re: [Moosefs-users] Re: Wrong disk usage in mfs cgi monitor and "df -h"

Hi, Bobbin,

XFS may be better. We use it as default fs.

Davies

On 2011-4-21, at 8:45 PM, boblin wrote:

Hi michal & youngcow:

Actually I reinstalled almost the entire MooseFS system, and still got the same result.

Now I think what youngcow said may be right (mke2fs takes 5% of the space).

At the very first moment (just after I mounted the mfs system and before I created any files) it already cost 5%, but the whole /data1/chunkdata only cost 2MB of disk space.

chunk-140-242:/data1/chunkdata # find /data1/chunkdata/ -type f
/data1/chunkdata/.lock
chunk-140-242:/data1/chunkdata #

chunk-140-242:/data1/chunkdata # ls -al /data1/chunkdata/* | awk '{ a+=$5; } END {print a/1024/1024"MB";}'
2MB
chunk-140-242:/data1/chunkdata #

As we know, mke2fs will reserve 5% of the space for the superuser. Could that be the reason?

master-140-244:~ # mke2fs -j /dev/cciss/c0d1p1
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
89604096 inodes, 179177276 blocks
8958863 blocks (5.00%) reserved for the super user
First data block=0
5469 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
master-140-244:~ #

------------------ Original message ------------------
From: "youngcow" <you...@gm...>
Sent: Thursday, April 21, 2011, 4:04 PM
To: "Michal Borychowski" <mic...@ge...>
Cc: "'boblin'" <ai...@qq...>; "moosefs-users" <moo...@li...>
Subject: Re: [Moosefs-users] Wrong disk usage in mfs cgi monitor and "df -h"

I think the reason is that the filesystem uses some space when you format it.

> Hi!
>
> That really looks a bit strange. We'd blame the difference between 'df -h /data1'
> and 'du -sh /data1/chunkdata' on the operating system; it has nothing to do with
> MooseFS (maybe some difference between protocols in the kernel or some tools?).
> What is your operating system? If 'df' and 'du' give wrong values, it may be the
> cause of other inconsistencies.
>
> And when you run "ls -al /data1/chunkdata/*/*", how many "chunk...mfs" files do
> you have? What is the sum of their sizes? Try something like this:
>
> ls -al /data1/chunkdata/*/* | awk '{ a+=$5; } END {print a;}'
>
> It should give the sum of the file sizes in bytes.
>
> Regards
> Michal
>
> From: boblin [mailto:ai...@qq...]
> Sent: Wednesday, April 20, 2011 7:18 PM
> To: Michal Borychowski
> Subject: Re: RE: RE: [Moosefs-users] Wrong disk usage in mfs cgi monitor and "df -h"
>
> hi Michal:
>
> Now I've got a problem with disk usage. "df -h" shows that the used space on
> partition "/data1" is 130M, but "du -hs *" shows only 1.1M. And the craziest
> thing is that it shows 35GB in the cgi monitor!! All my 3 chunkservers have
> the same problem.
>
> I read the FAQ and found the "df -h" question (N * 256 * num_of_hdd increased
> by disk usage); it doesn't help...
>
> How can I know the exact disk usage?
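The mke2fs transcript quoted above can be checked against the ~35 GB the CGI monitor reports: 8958863 reserved blocks at a 4096-byte block size come to roughly 34 GiB that the filesystem counts as unavailable. A quick check of the arithmetic, using only the figures from that transcript:

```python
# Figures copied from the mke2fs transcript in the thread
BLOCK_SIZE = 4096          # "Block size=4096 (log=2)"
RESERVED_BLOCKS = 8958863  # "8958863 blocks (5.00%) reserved for the super user"

reserved_bytes = RESERVED_BLOCKS * BLOCK_SIZE
print(f"{reserved_bytes / 2**30:.1f} GiB reserved for root")  # about 34.2 GiB
```

If the root reservation turns out to be the cause, `tune2fs -m 0 <device>` removes it on a dedicated ext2/3 data partition (keeping a reserve matters mainly on the root filesystem, not on a chunk store).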
From: Davies L. <dav...@gm...> - 2011-04-21 16:55:49

It DOES have one; you can see the result on the web interface:

Filesystem check info:
  check loop start time: Thu Apr 21 18:20:11 2011
  check loop end time:   Thu Apr 21 22:20:27 2011
  files:  1382506  (under-goal files: 0,  missing files: 0)
  chunks: 2099965  (under-goal chunks: 0, missing chunks: 0)

On 2011-4-21, at 10:23 PM, Léon Keijser wrote:

> Hi,
>
> I was wondering if mfs has some kind of filesystem check, other than the
> ones obviously on the chunkservers. To be more specific, how can I
> determine all my files are still intact?
>
> kind regards,
>
> Léon
From: Davies L. <dav...@gm...> - 2011-04-21 16:53:20

Hi, Bobbin,

XFS may be better. We use it as default fs.

Davies

On 2011-4-21, at 8:45 PM, boblin wrote:

> Hi michal & youngcow:
>
> Actually I reinstalled almost the entire MooseFS system, and still got the same result.
>
> Now I think what youngcow said may be right (mke2fs takes 5% of the space).
>
> At the very first moment (just after I mounted the mfs system and before I
> created any files) it already cost 5%, but the whole /data1/chunkdata only
> cost 2MB of disk space.
>
> chunk-140-242:/data1/chunkdata # find /data1/chunkdata/ -type f
> /data1/chunkdata/.lock
> chunk-140-242:/data1/chunkdata #
>
> chunk-140-242:/data1/chunkdata # ls -al /data1/chunkdata/* | awk '{ a+=$5; } END {print a/1024/1024"MB";}'
> 2MB
> chunk-140-242:/data1/chunkdata #
>
> As we know, mke2fs will reserve 5% of the space for the superuser.
> Could that be the reason?
From: Davies L. <dav...@gm...> - 2011-04-21 16:50:34

MooseFS was designed for large-file storage. If you have so many small files, you should choose another solution, such as beansdb [1] or Riak [2].

[1] http://code.google.com/p/beansdb/
[2] http://wiki.basho.com/

On 2011-4-21, at 8:03 PM, ha...@si... wrote:

> I have some doubts now; please answer me:
>
> Problem one: About MooseFS, I had asked what size it supports for a single
> file, and what about the total number of files?
From: Léon K. <ke...@st...> - 2011-04-21 14:23:34

Hi,

I was wondering if mfs has some kind of filesystem check, other than the ones obviously on the chunkservers. To be more specific, how can I determine all my files are still intact?

kind regards,

Léon
From: Michal B. <mic...@ge...> - 2011-04-21 12:45:23

Yes, yes :) And best with quick swap on SSD :)

Regards
Michal

-----Original Message-----
From: Steve [mailto:st...@bo...]
Sent: Thursday, April 21, 2011 2:40 PM
To: ha...@si...; Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] i have some doubts,now ,please answer me

or part ram part swap ?
From: Steve <st...@bo...> - 2011-04-21 12:40:28

or part ram part swap ?

-------Original Message-------
From: Michal Borychowski
Date: 21/04/2011 13:10:05
To: ha...@si...
Cc: moo...@li...
Subject: Re: [Moosefs-users] I have some doubts,now ,please answer me

Hi!

There is (theoretically) no limit for the number of files in the system.

For 300 million files you'll need: 300 * 300 MB = 87.8 GB of RAM

Best regards
Michal Borychowski
From: Michal B. <mic...@ge...> - 2011-04-21 12:25:49

Hi!

We let the system manage swap usage. There are some plans to further optimize metadata, but for the moment we do not interfere with swap at the level of MooseFS.

Regards
Michal

From: Fyodor Ustinov [mailto:uf...@uf...]
Sent: Thursday, April 21, 2011 2:14 PM
To: Michal Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] i have some doubts,now ,please answer me

On 04/21/2011 03:09 PM, Michal Borychowski wrote:
> There is (theoretically) no limit for the number of files in the system.
>
> For 300 million files you'll need: 300 * 300 MB = 87.8 GB of RAM

Michal, is it reasonable to use swap in such cases? Not all the files are used simultaneously, and some information about them could be moved to swap.
From: Fyodor U. <uf...@uf...> - 2011-04-21 12:14:26

On 04/21/2011 03:09 PM, Michal Borychowski wrote:
> Hi!
>
> There is (theoretically) no limit for the number of files in the system.
>
> For 300 million files you'll need: 300 * 300 MB = 87.8 GB of RAM

Michal, is it reasonable to use swap in such cases? Not all the files are used simultaneously, and some information about them could be moved to swap.
From: Michal B. <mic...@ge...> - 2011-04-21 12:09:40

Hi!

There is (theoretically) no limit for the number of files in the system.

For 300 million files you'll need: 300 * 300 MB = 87.8 GB of RAM

Best regards
Michal Borychowski

From: ha...@si... [mailto:ha...@si...]
Sent: Thursday, April 21, 2011 2:03 PM
To: moo...@li...
Subject: [Moosefs-users] i have some doubts,now ,please answer me

> I have some doubts now; please answer me:
>
> Problem one: About MooseFS, I had asked what size it supports for a single
> file, and what about the total number of files?
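The rule of thumb in the answer above works out to roughly 300 bytes of master RAM per file (the "300 * 300 MB" figure in the mail appears garbled; 300 bytes per file is an assumption inferred from the numbers in this thread, and the true per-file cost depends on the MooseFS version and chunk counts). As a hedged sizing sketch:

```python
BYTES_PER_FILE = 300  # assumed per-file metadata cost, inferred from the thread

def master_ram_gb(n_files: int) -> float:
    """Rough MooseFS master RAM estimate, in decimal GB."""
    return n_files * BYTES_PER_FILE / 1e9

print(master_ram_gb(300_000_000))  # 300 million files -> 90.0 GB
```

So "no limit on the number of files" holds only in the sense that nothing is enforced by the software; in practice the master's RAM caps how much metadata it can hold.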
From: <ha...@si...> - 2011-04-21 12:03:42

I have some doubts now; please answer me:

Problem one: About MooseFS, I had asked what size it supports for a single file, and what about the total number of files? Your answer was:

"Limit of 2TiB is for a single file. There is no limit for the total size of all files in the system."

But today your reply is "if it supports 3000 million files? We would need about 3000 * 300 MB = 878 GB of RAM, which would be rather impossible", and today's answer conflicts with that earlier statement (i.e. "no limit for the total size of all files in the system").

Please tell me: for 300 million files, how much RAM do we need in the system? As you say, does it need 87 GB of RAM, or is there "no limit"?

That's all, thanks a lot!

Sincerely looking forward to your reply!

Best regards!

Hanyw
From: Michal B. <mic...@ge...> - 2011-04-21 11:24:54
|
Hi! But the default chunk size of 64MB is hard coded and we do not recommend to tamper with it as unforeseen things may happen. Best regards -Michal -----Original Message----- From: Davies Liu [mailto:dav...@gm...] Sent: Thursday, April 21, 2011 1:18 PM To: Heiko Schröter Cc: moo...@li... Subject: Re: [Moosefs-users] Chunks Hi, you could modify default chunk size 64M to very high value, then a file has only one chunk, and in one chunk server. Davies 在 2011-2-24,下午4:24, Heiko Schröter 写道: > Hello, > > we are currently investigating moosefs as a successor of our 200TB storage cfs. > mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5 > Everything is working fine. > > We have a question about the way mfs handles chunks. > Is it possible to keep the chunks on a single chunkserver, instead of "load balance" them to all chunkservers ? > > Reason is that in case of a total unrecoverable loss of a single chunkserver we would loose some files completly. > But that would be better to us than loosing some parts in all files. > > Incrementing the goal is not an option since the storage capacity is limited. > > Thanks and Regards > Heiko > > ------------------------------------------------------------------------------ > Free Software Download: Index, Search & Analyze Logs and other IT data in > Real-Time with Splunk. Collect, index and harness all the fast moving IT data > generated by your applications, servers and devices whether physical, virtual > or in the cloud. Deliver compliance at lower cost and gain new business > insights. http://p.sf.net/sfu/splunk-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users ------------------------------------------------------------------------------ Benefiting from Server Virtualization: Beyond Initial Workload Consolidation -- Increasing the use of server virtualization is a top priority.Virtualization can reduce costs, simplify management, and improve application availability and disaster protection. Learn more about boosting the value of server virtualization. http://p.sf.net/sfu/vmware-sfdev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
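Both replies turn on the same arithmetic: with the default 64 MiB chunk size, any file larger than one chunk is split across several chunks, which the master then distributes over the chunkservers. A small sketch of how many chunks a file occupies (the 64 MiB constant is MooseFS's documented default; the helper name is mine):

```python
import math

CHUNK_SIZE = 64 * 1024 * 1024  # MooseFS default chunk size: 64 MiB (hard-coded)

def chunk_count(file_size_bytes):
    """How many chunks a file of the given size occupies (at least one)."""
    return max(1, math.ceil(file_size_bytes / CHUNK_SIZE))

# A 1 GiB file is split into 16 chunks, so its pieces can land on many
# chunkservers -- which is exactly Heiko's concern about a total loss.
print(chunk_count(1024**3))  # 16
```

Raising the chunk size until every file fits in one chunk would indeed pin each file to one chunkserver, which is why Michal warns against changing the hard-coded value.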
From: Davies L. <dav...@gm...> - 2011-04-21 11:18:31
|
Hi, you could modify the default chunk size of 64M to a very high value; then a file has only one chunk, stored on one chunk server. Davies On 2011-2-24, at 4:24 PM, Heiko Schröter wrote: > Hello, > > we are currently investigating moosefs as a successor of our 200TB storage cfs. > mfs-1.6.20, 2.6.36-gentoo-r5, x86_64, fuse 2.8.5 > Everything is working fine. > > We have a question about the way mfs handles chunks. > Is it possible to keep the chunks on a single chunkserver, instead of "load balancing" them across all chunkservers? > > The reason is that in case of a total unrecoverable loss of a single chunkserver we would lose some files completely. > But that would be better for us than losing some parts of all files. > > Incrementing the goal is not an option since the storage capacity is limited. > > Thanks and Regards > Heiko > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Boli <bo...@le...> - 2011-04-21 09:18:08
|
Does it have anything to do with reserved space for root on the filesystem? (tune2fs -m 1, or similar...) On 21/04/2011 11:04, youngcow wrote: > I think the reason is that the filesystem uses some space when you format it. > >> Hi! >> >> That really looks a bit strange. We'd blame the difference between 'df -h >> /data1' and 'du -sh /data1/chunkdata' on the operating system; it >> has nothing to do with MooseFS (maybe some difference between protocols >> in the kernel or some tools?). What is your operating system? If 'df' and >> 'du' give wrong values it may be a sign of other inconsistencies. >> >> And when you run this command: "ls -al /data1/chunkdata/*/*" – how many >> "chunk…mfs" files do you have? What is the sum of their sizes? Do >> something like this: >> >> ls -al /data1/chunkdata/*/* | awk '{ a+=$5; } END {print a;}' >> >> it should give the sum of the file sizes in bytes. >> >> Regards >> >> Michal >> >> *From:*boblin [mailto:ai...@qq...] >> *Sent:* Wednesday, April 20, 2011 7:18 PM >> *To:* Michal Borychowski >> *Subject:* Re:RE: RE: [Moosefs-users] Wrong disk usage in mfs cgi >> monitor and "df -h" >> >> hi Michal : >> >> Now I've got a problem about disk usage: "df -h" shows that partition >> "/data1" used space is 130M, but "du -hs *" shows only >> 1.1M. And the craziest thing is that it shows 35GB in the cgi monitor!! >> All my 3 chunkservers have the same problem. >> >> I had read the FAQ and found the "df -h" question (N * 256 * num_of_hdd >> increased by disk usage); it doesn't help... >> >> How can I know the exact disk usage? >> >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
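youngcow's point can be quantified: mke2fs reserves 5% of an ext2/3/4 filesystem's blocks for root by default, and df reports that reservation (plus inode tables and other format-time overhead) as used space before a single chunk is written. A rough sketch of the reservation's size (plain arithmetic; the function name is mine):

```python
def reserved_bytes(partition_bytes, reserved_pct=5.0):
    """Space set aside by mke2fs's default reserved-blocks percentage."""
    return partition_bytes * reserved_pct / 100

# On a 1 TiB chunkserver disk, the default 5% reservation alone is 51.2 GiB:
print(round(reserved_bytes(1024**4) / 1024**3, 1))  # 51.2
```

On a partition holding only chunk data, that reservation can typically be reclaimed with tune2fs -m 0 <device>, along the lines Boli suggests.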
From: youngcow <you...@gm...> - 2011-04-21 08:04:26
|
I think the reason is that the filesystem uses some space when you format it. > Hi! > > That really looks a bit strange. We'd blame the difference between 'df -h > /data1' and 'du -sh /data1/chunkdata' on the operating system; it > has nothing to do with MooseFS (maybe some difference between protocols > in the kernel or some tools?). What is your operating system? If 'df' and > 'du' give wrong values it may be a sign of other inconsistencies. > > And when you run this command: "ls -al /data1/chunkdata/*/*" – how many > "chunk…mfs" files do you have? What is the sum of their sizes? Do > something like this: > > ls -al /data1/chunkdata/*/* | awk '{ a+=$5; } END {print a;}' > > it should give the sum of the file sizes in bytes. > > Regards > > Michal > > *From:*boblin [mailto:ai...@qq...] > *Sent:* Wednesday, April 20, 2011 7:18 PM > *To:* Michal Borychowski > *Subject:* Re:RE: RE: [Moosefs-users] Wrong disk usage in mfs cgi > monitor and "df -h" > > hi Michal : > > Now I've got a problem about disk usage: "df -h" shows that partition > "/data1" used space is 130M, but "du -hs *" shows only > 1.1M. And the craziest thing is that it shows 35GB in the cgi monitor!! > All my 3 chunkservers have the same problem. > > I had read the FAQ and found the "df -h" question (N * 256 * num_of_hdd > increased by disk usage); it doesn't help... > > How can I know the exact disk usage? > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michal B. <mic...@ge...> - 2011-04-21 07:16:53
|
Hi! This is a kernel issue. There are limits on parallel I/O operations. You can still run 'df' at the same time; 'df' operates independently of normal I/O operations and doesn't lock up. In real environments we do not see such lockups. Is this your test environment or a real-life one? You could probably do some kernel tuning to increase the number of parallel I/O operations. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: [Redhosting] Egbert Groot [mailto:eg...@re...] Sent: Wednesday, April 20, 2011 2:22 PM To: moo...@li... Subject: [Moosefs-users] Lockups Hi All, I have a test setup with one moosefs master and two chunkservers with one disk each. All three servers mount the moosefs filesystem. When one server is heavily loaded, using the mounted mfs quite intensely, every 20-30 seconds the mfs mount seems to get locked: I can't do 'ls -l /mfsmount'. It hangs for a couple of seconds (about 10 seconds). If at the same time I do the 'ls -l /mfsmount' on one of the other servers, there's no problem. So the lockup doesn't seem to be in the master server, but in the client server itself. How can I debug this issue? Is there a way to let fuse log this kind of info? regards, Egbert. |
From: b. <ai...@qq...> - 2011-04-21 03:36:13
|
To make the spec file work, you should re-tar the original tarball: 1. tar -zxvf mfs-1.6.20-2.tar.gz 2. mv mfs-1.6.20-2 mfs-1.6.20 3. rm -f mfs-1.6.20-2.tar.gz && tar -zcvf mfs-1.6.20-2.tar.gz mfs-1.6.20 4. copy it into /usr/src/redhat/SOURCES 5. now you can use the rpmbuild command to build the RPMs |
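The five steps above can be rehearsed end to end in a throwaway directory. The sketch below is self-contained: it first fabricates a stand-in for the upstream mfs-1.6.20-2.tar.gz (the dummy configure file and temp paths are illustrative only), then performs the repack exactly as listed, so the rebuilt tarball unpacks to mfs-1.6.20/ as the spec file expects:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Fabricate a stand-in for the upstream tarball, which unpacks to mfs-1.6.20-2/
mkdir mfs-1.6.20-2
touch mfs-1.6.20-2/configure
tar -zcf mfs-1.6.20-2.tar.gz mfs-1.6.20-2
rm -r mfs-1.6.20-2

# The repack from the message: make the tarball unpack to mfs-1.6.20/,
# the directory name the spec file's %setup expects.
tar -zxf mfs-1.6.20-2.tar.gz
mv mfs-1.6.20-2 mfs-1.6.20
rm -f mfs-1.6.20-2.tar.gz
tar -zcf mfs-1.6.20-2.tar.gz mfs-1.6.20

# The first entry in the rebuilt tarball is now mfs-1.6.20/
tar -ztf mfs-1.6.20-2.tar.gz | head -n 1
```

After this, the tarball goes into /usr/src/redhat/SOURCES and rpmbuild should find the directory name it expects.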
From: Scoleri, S. <Sco...@gs...> - 2011-04-20 23:36:56
|
The spec file doesn't know how to deal with the "-2", and the rpmbuild fails. I haven't had time to figure it out; has anyone else? Thanks, -Scoleri |
From: Thomas S H. <tha...@gm...> - 2011-04-20 15:27:10
|
Since my name came up while I was sleeping, I will comment. My goal with mfs-failover (https://github.com/thatch45/mfs-failover) is to make it operate without the overhead of extra layers like VM-level failover and DRBD solutions. With that said, kudos for finding these solutions; I hope I can hammer out my solution soon. -Thomas S Hatch |
From: Fabien G. <fab...@gm...> - 2011-04-20 13:14:06
|
Hello, 2011/4/20 Alexander Akhobadze <akh...@ri...> > I had not any performance issues because of virtualization. > > But ... ESX is for money and MooseFS is for free ;--) > It seems you can use KVM/Xen + Kemari (see http://www.osrg.net/kemari/ and http://wiki.qemu.org/Features/FaultTolerance), and get the same kind of fault-tolerance infrastructure... for free. If someone has ever tried it, I'd be really happy to read about their experience... Fabien |
From: [Redhosting] E. G. <eg...@re...> - 2011-04-20 12:47:48
|
Hi All, I have a test setup with one moosefs master and two chunkservers with one disk each. All three servers mount the moosefs filesystem. When one server is heavily loaded, using the mounted mfs quite intensely, every 20-30 seconds the mfs mount seems to get locked: I can't do 'ls -l /mfsmount'. It hangs for a couple of seconds (about 10 seconds). If at the same time I do the 'ls -l /mfsmount' on one of the other servers, there's no problem. So the lockup doesn't seem to be in the master server, but in the client server itself. How can I debug this issue? Is there a way to let fuse log this kind of info? regards, Egbert. |
From: Alexander A. <akh...@ri...> - 2011-04-20 09:09:59
|
Hi ALL! We also use MooseFS in a virtual ESX 4.1 environment and I can share my experience. Virtual machines are openSUSE 2.6.34.7. On MooseFS we have around 1 000 000 fs objects # ls -l /moosefs/metadata/metadata.mfs.back -rw-r----- 1 mfs mfs 121913374 Apr 20 12:00 /moosefs/metadata/metadata.mfs.back 7 MFS clients connected (mfs is shared by Samba for Windows clients, including a terminal server farm and Windows XP stations). I haven't had any performance issues because of virtualization. But ... ESX is for money and MooseFS is for free ;--) wbr Alexander Akhobadze ====================================================== Hi! On physical hosts it is VMWare ESX4.1; on virtual machines it is Debian Squeeze. The test environment had 5 chunkservers (some older physical servers with one Xeon CPU, 1GB RAM, 1TB SATA drives, Debian Squeeze), mfsmaster and mfsmetalogger on virtual machines, and 5 clients using MFS. We have not noticed any difficulties. Everything is connected with a 1Gbps network. There weren't many files in the MFS environment: about 10k, 1TB of data. In one to two months we plan to move the whole hosting environment (about 6TB of data; I don't know the number of files, but it is high :) ) to MFS and then I will be able to give you some more detailed performance info. From my experience, there is no big impact on performance in virtual machines. We virtualized about 30 servers last year, and I don't see any difference in performance between physical servers and virtual machines. Best regards Krzysztof Janiszewski ecenter sp. z o.o. -------------------------------------- Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: This message was sent to you by ecenter sp. z o.o., ul. Goździkowa 2, 87-100 Toruń, registered with the District Court in Toruń, 7th Commercial Division of the National Court Register, under no. 0000251110, with share capital of PLN 142,500, NIP 956-216-66-73. -----Original Message----- From: Michal Borychowski [mailto:mic...@ge...] 
Sent: Wednesday, April 20, 2011 9:29 AM To: 'Krzysztof Janiszewski - ecenter sp. z o.o.' Cc: moo...@li... Subject: RE: [Moosefs-users] Fw: Re: chunkserver over several offices Hi! The solution is very smart. In what environment have you done the tests? What operating systems (for the physical machine and the virtual machine)? How many files were there in MooseFS? How big was the metadata file? How many clients connected? Was the whole MooseFS quite busy? What about the performance of the master running in virtual machines? Kind regards -Michal -----Original Message----- From: Krzysztof Janiszewski - ecenter sp. z o.o. [mailto:k.j...@ec...] Sent: Wednesday, April 20, 2011 9:06 AM To: 'Boyko Yordanov' Cc: moo...@li... Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices We have found a failover solution for the mfs master server. It is running on a VMWare virtual machine which is configured in fault tolerance mode. This means that there are in fact two running virtual machines on two physical hosts. Primary VM and secondary VM are doing the same CPU operations. When the primary VM fails, the secondary takes over all tasks and everything keeps running without interruption or data loss. Best regards Krzysztof Janiszewski ecenter sp. z o.o. -------------------------------------- Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: -----Original Message----- From: Boyko Yordanov [mailto:b.y...@ex...] Sent: Wednesday, April 20, 2011 8:48 AM To: Michal Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices Hi, Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure? So far there is no real working failover solution, not even Thomas Hatch's. It's just that mfsmetalogger can't be trusted. 
Boyko On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote: >> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people have been waiting for and I hope it will cater to your needs. |
From: Krzysztof J. - e. s. z o.o. <k.j...@ec...> - 2011-04-20 08:03:53
|
Boyko, it is not possible to have more than one copy of the virtual machine, but I think it is not necessary. When the primary VM fails, the secondary takes over everything and becomes primary. Then VMWare automatically creates a new secondary VM on the next available physical host. Best regards Krzysztof Janiszewski ecenter sp. z o.o. -------------------------------------- Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: -----Original Message----- From: Boyko Yordanov [mailto:b.y...@ex...] Sent: Wednesday, April 20, 2011 9:57 AM To: Krzysztof Janiszewski - ecenter sp. z o.o. Cc: moo...@li...; Michal Borychowski Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices Hi, Just want to add a question: as I'm not familiar with VMware's fault tolerance mode, is it possible to configure more than one 'backup' VM? E.g. having 2 or more backup copies of the same VM? Boyko On Apr 20, 2011, at 10:28 AM, Michal Borychowski wrote: > Hi! > > The solution is very smart. In what environment have you done the tests? > What operating systems (for the physical machine and the virtual machine)? > How many files were there in MooseFS? How big was the metadata file? How > many clients connected? Was the whole MooseFS quite busy? What about the > performance of the master running in virtual machines? > > > Kind regards > -Michal > > -----Original Message----- > From: Krzysztof Janiszewski - ecenter sp. z o.o. > [mailto:k.j...@ec...] > Sent: Wednesday, April 20, 2011 9:06 AM > To: 'Boyko Yordanov' > Cc: moo...@li... > Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices > > We have found a failover solution for the mfs master server. It is > running on a VMWare virtual machine which is configured in fault > tolerance mode. This means that there are in fact two running virtual > machines on two physical hosts. Primary VM and secondary VM are doing > the same CPU operations. 
When the primary VM fails, the secondary takes over > all tasks and everything keeps running without interruption or data loss. > > Best regards > Krzysztof Janiszewski > ecenter sp. z o.o. > -------------------------------------- > Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: > > > -----Original Message----- > From: Boyko Yordanov [mailto:b.y...@ex...] > Sent: Wednesday, April 20, 2011 8:48 AM > To: Michal Borychowski > Cc: moo...@li... > Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices > > Hi, > > Great news indeed; however, isn't it more important to implement a > reliable failover solution and fix the single point of failure? > > So far there is no real working failover solution, not even Thomas > Hatch's. It's just that mfsmetalogger can't be trusted. > > Boyko > > On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote: > >>> I've got great news - we are going to introduce big improvements in > the upcoming 1.6.21 version which also include "rack awareness" :) This is > a feature lots of people have been waiting for and I hope it will cater to > your needs. >> 
> _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Krzysztof J. - e. s. z o.o. <k.j...@ec...> - 2011-04-20 08:00:46
|
Hi! On physical hosts it is VMWare ESX4.1; on virtual machines it is Debian Squeeze. The test environment had 5 chunkservers (some older physical servers with one Xeon CPU, 1GB RAM, 1TB SATA drives, Debian Squeeze), mfsmaster and mfsmetalogger on virtual machines, and 5 clients using MFS. We have not noticed any difficulties. Everything is connected with a 1Gbps network. There weren't many files in the MFS environment: about 10k, 1TB of data. In one to two months we plan to move the whole hosting environment (about 6TB of data; I don't know the number of files, but it is high :) ) to MFS and then I will be able to give you some more detailed performance info. From my experience, there is no big impact on performance in virtual machines. We virtualized about 30 servers last year, and I don't see any difference in performance between physical servers and virtual machines. Best regards Krzysztof Janiszewski ecenter sp. z o.o. -------------------------------------- Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: This message was sent to you by ecenter sp. z o.o., ul. Goździkowa 2, 87-100 Toruń, registered with the District Court in Toruń, 7th Commercial Division of the National Court Register, under no. 0000251110, with share capital of PLN 142,500, NIP 956-216-66-73. -----Original Message----- From: Michal Borychowski [mailto:mic...@ge...] Sent: Wednesday, April 20, 2011 9:29 AM To: 'Krzysztof Janiszewski - ecenter sp. z o.o.' Cc: moo...@li... Subject: RE: [Moosefs-users] Fw: Re: chunkserver over several offices Hi! The solution is very smart. In what environment have you done the tests? What operating systems (for the physical machine and the virtual machine)? How many files were there in MooseFS? How big was the metadata file? How many clients connected? Was the whole MooseFS quite busy? What about the performance of the master running in virtual machines? Kind regards -Michal -----Original Message----- From: Krzysztof Janiszewski - ecenter sp. z o.o. 
[mailto:k.j...@ec...] Sent: Wednesday, April 20, 2011 9:06 AM To: 'Boyko Yordanov' Cc: moo...@li... Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices We have found a failover solution for the mfs master server. It is running on a VMWare virtual machine which is configured in fault tolerance mode. This means that there are in fact two running virtual machines on two physical hosts. Primary VM and secondary VM are doing the same CPU operations. When the primary VM fails, the secondary takes over all tasks and everything keeps running without interruption or data loss. Best regards Krzysztof Janiszewski ecenter sp. z o.o. -------------------------------------- Domeny, hosting, poczta wideo :: http://www.ecenter.pl :: -----Original Message----- From: Boyko Yordanov [mailto:b.y...@ex...] Sent: Wednesday, April 20, 2011 8:48 AM To: Michal Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] Fw: Re: chunkserver over several offices Hi, Great news indeed; however, isn't it more important to implement a reliable failover solution and fix the single point of failure? So far there is no real working failover solution, not even Thomas Hatch's. It's just that mfsmetalogger can't be trusted. Boyko On Apr 20, 2011, at 8:18 AM, Michal Borychowski wrote: >> I've got great news - we are going to introduce big improvements in the upcoming 1.6.21 version which also include "rack awareness" :) This is a feature lots of people have been waiting for and I hope it will cater to your needs. > 
_______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |