From: Thomas S H. <tha...@gm...> - 2011-01-15 07:02:20
|
Heh, I was right: something outside the scope of what I was considering. This gets tricky. What you are suggesting is to layer storage paradigms on top of MooseFS in order to create an environment that MooseFS alone cannot serve. On the other hand, finding a storage platform that gives you all of this at the platform level is hard, maybe impossible. This approach also creates tiers of singly redundant services, so it is very vulnerable. To sum up: what you want does not exist, and I think you already know that; the problem is how to create it. I am afraid I don't know how to build what you need, and while a massive single block device on top of MooseFS could be used in a cloud environment, it would not be an optimal use of MooseFS. But I do agree with one thing: quotas would be great.

-Tom

2011/1/14 颜秉珩 <rw...@12...>
> Because there is no quota function in MFS.
> How can we provide multitenancy function based on MFS?
From: 颜. <rw...@12...> - 2011-01-15 06:46:04
|
Because there is no quota function in MFS, how can we provide multitenancy based on MFS?

To add some explanation of what I mean by multitenancy:

1. quota
2. access control
3. easy access from Linux/Windows

So we create a large image and wrap it up to hand out to the different guest OSes.

2011-01-15
From: 颜. <rw...@12...> - 2011-01-15 06:41:08
|
Because there is no quota function in MFS. How can we provide multitenancy function based on MFS?

2011-01-15
From: Thomas S H. <tha...@gm...> - 2011-01-15 05:48:53
|
The last thing I want to do is start a flame war, but I don't understand how you are setting up your cloud infrastructure such that a 2T VM image is a good idea. My initial impression is that there is something about the deployment that would justify that kind of VM image.

If you need to access large amounts of data storage from a VM, then the VM should mount the MooseFS share; maintaining a virtual machine disk image of that size is bad for performance (every write translates into large-scale chunk changes).

If you want to use a distributed file system for a cloud infrastructure (and MooseFS is an excellent choice), then I am going to recommend that you limit the writes to the virtual machine image and offload as much as you can onto mounts of the MooseFS filesystem itself.

In my deployments I have all storage-class file access on MooseFS mounts, and it has greatly improved the performance of the filesystem and the virtual machines.

-Tom Hatch

On Fri, Jan 14, 2011 at 10:19 PM, 颜秉珩 <rw...@12...> wrote:
> we use MFS to create virtual block device,
> A block device larger than 2T is much popular nowadays!
> So I think 2T limitation impacts the MFS application in this area (cloud storage environment)
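As a rough illustration of the approach Tom describes, and only a sketch (the master hostname, the mount point and the port are assumptions, not taken from his deployment): keep the VM image small and mount MooseFS directly inside the guest for the bulk data.

  # install the MooseFS client inside the guest, then:
  mkdir -p /mnt/mfs
  # mount the cluster; -H names the master host (an assumed hostname here)
  mfsmount /mnt/mfs -H mfsmaster -P 9421
  # point the data-heavy application paths at /mnt/mfs instead of the VM's own disk

The exact package names and mfsmount options depend on the MooseFS version in use.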
From: 颜. <rw...@12...> - 2011-01-15 05:22:45
|
We use MFS to create virtual block devices. A block device larger than 2T is very popular nowadays! So I think the 2T limitation impacts the MFS application in this area (cloud storage environment).

From: Michal Borychowski
Sent: 2010-12-30 15:25:39
To: 'yanbh'
Cc: 'moosefs-users'
Subject: Re: [Moosefs-users] How many mfs clients can be supported by the same master at the same time!!

Hi!

From: yanbh [mailto:ya...@in...]
Sent: Thursday, December 30, 2010 7:49 AM
To: moosefs-users
Subject: [Moosefs-users] How many mfs clients can be supported by the same master at the same time!!

Dear all!

I have some questions about MFS, as follows!

1. How many mfs clients can be supported by the same master server at the same time? What about 100 clients mounting the same mfs master at the same time? What about the performance?

[MB] 100 clients would not be a problem. Performance should also not be affected (metadata in the mfs master are kept in RAM for speed). Please have a look here for our installation: http://80.48.16.122/mfs.cgi?masterport=9421&mastername=bellona_main&sections=MS

2. We know that mfs can only support a file whose size is less than 2T; this is a bad limitation. When will it be removed? Any plan?

[MB] Is it really a big problem for you? Can't you divide the files so that they are not bigger than 2TB? For the moment removing this limitation doesn't have big priority.

Whatever, thanks to the developers for giving us an excellent DFS!!

[MB] :)

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

Best Regards!

2010-12-30
yanbh
From: Laurent W. <lw...@hy...> - 2011-01-14 17:06:19
|
On Fri, 14 Jan 2011 11:45:33 -0500 "Scoleri, Steven" <Sco...@gs...> wrote: > Is the website down? > works for me™ -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Scoleri, S. <Sco...@gs...> - 2011-01-14 16:58:44
|
Is the website down? |
From: Michal B. <mic...@ge...> - 2011-01-14 09:13:35
|
MooseFS operates on 64MB chunks. So at the beginning all copies would be created on all the nodes: the first 3GB of data (as 1GB is reserved on node1) would be fully distributed over the three nodes. After that, the remaining 2GB of data would be written to node2 and node3 only. So the file would have 3GB of its data in three copies, but the whole 5GB would exist in only two copies. All writes would succeed and the file would be accessible, but it would be "undergoal". When you expand the space on node1 or attach a node4, the system will automatically try to copy the remaining 2GB of data so that the file has all of its data in full three copies.

Regards
Michal
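A minimal sketch of how one might watch this undergoal state from a client mount, using the standard MooseFS tools mentioned elsewhere in the thread (the directory and file names are made up for illustration):

  # ask for 3 copies of everything under the directory
  mfssetgoal -r 3 /mnt/mfs/isos
  # write the file, then inspect how many valid copies each chunk has
  mfscheckfile /mnt/mfs/isos/big.iso
  mfsfileinfo /mnt/mfs/isos/big.iso

mfscheckfile summarizes chunks per copy count and mfsfileinfo lists the chunkservers holding each chunk, so chunks reported with fewer copies than the goal are the ones the master will re-replicate once space or a new chunkserver appears.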
From: Jun C. P. <jun...@gm...> - 2011-01-13 23:27:58
|
Hi,

I'm wondering how I can make sure that the recovery process from a failure (e.g., missing chunks due to node failure) has finished. Let me consider a slightly sophisticated scenario, assuming the goal is 3 and there are three chunkservers.

1. A chunkserver goes down for 5 minutes.
2. During those 5 minutes there are a lot of updates to existing files and a lot of newly created files, so the relevant chunks are updated only on the two nodes that are still alive.
3. After 5 minutes the failed node comes back. I think the metadata server will recognize that there are some missing chunks and new updates on the existing chunks.

I could see via mfsfileinfo and mfscheckfile that the metadata server starts to gradually recover the inconsistent state by creating the missing chunks. However, it was hard to figure out whether the new updates to the existing chunks were also synchronized. Is there any specific way or command to see whether the whole recovery process is really done, including the new updates on chunks?

Thanks,

-Jun
From: Piotr S. <pio...@gm...> - 2011-01-13 16:10:39
|
Hello!

What if, for instance, I have different-sized disks dedicated to MooseFS, like:

node1: 4 GB
node2: 7 GB
node3: 10 GB

Then:

- What happens if I want to write a 5 GB ISO file with mfssetgoal = 3 on a directory? Will only 2 copies be written to node2 and node3, or will I get an error with a capacity warning?

- If MFS chooses the nodes randomly, then given the situation above, my conclusions about what would happen if I wanted to copy, again, a 5 GB file are:

1st option: mfsmaster chooses node1 and node2 to write the 5 GB file. There's not enough free space. The copy fails.
2nd option: mfsmaster chooses node2 and node3. The write succeeds.
3rd option: mfsmaster chooses node3 and node1. The write fails.

Hence there is a 33% chance that I will manage to save my file on the storage. What about this?
From: Rustam A. <ru...@gm...> - 2011-01-13 12:28:07
|
Hi Steve,

Thanks for the information. Just to clarify: will it use disk space only (and no RAM at all), or will it mix RAM and disk by keeping the most recently used metadata in RAM? Also, I noticed that the block size is 64KiB. Is there any chance to change this? My files are on average about 10K.

Many thanks,
Rustam.

On 13 January 2011 10:05, Steve <st...@bo...> wrote:
> It will use disk space instead.
From: Steve <st...@bo...> - 2011-01-13 10:05:56
|
It will use disk space instead.

My 4-chunkserver, 100,000-file, 6 TB system runs happily with just 256 MB of RAM in the master, though of course with just 1-2 users!!
From: Michal B. <mic...@ge...> - 2011-01-13 09:39:07
|
Hi Piotr!

For the moment there is no chunk or rack "awareness". So generally speaking chunks are chosen at random; the only exception is when the client is also a chunkserver.

For more detailed information you can have a look at this post:
http://sourceforge.net/mailarchive/message.php?msg_id=26743571

If you need any further assistance please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01
From: Laurent W. <lw...@hy...> - 2011-01-13 09:37:13
|
On Thu, 13 Jan 2011 10:30:49 +0100 Piotr Skurczak <pio...@gm...> wrote: > Hello, Hi, > > Quick question. Is it possible to force moosefs, to tell it where a copy > should be written ? > Or - another way - if I copy a testfile.tar.gz (2 MB) to my mountpoint - can > I point on which chunk the data can be written ? > > My file system is generally a home made high availability samba share with > nodes scattered around the world for files < 5 MB. > If users from a certain location decide to write only one copy it would be > good not to write to a node that is on the other side of the globe, but 5 > meters ahead... > > Peter It's not yet possible. Chunks are written on random chunkservers minus space remaining for load balance. Location awareness is on the roadmap, though there's no fixed date nor version for implementation. HTH, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Piotr S. <pio...@gm...> - 2011-01-13 09:30:56
|
Hello, Quick question. Is it possible to force moosefs, to tell it where a copy should be written ? Or - another way - if I copy a testfile.tar.gz (2 MB) to my mountpoint - can I point on which chunk the data can be written ? My file system is generally a home made high availability samba share with nodes scattered around the world for files < 5 MB. If users from a certain location decide to write only one copy it would be good not to write to a node that is on the other side of the globe, but 5 meters ahead... Peter |
From: Laurent W. <lw...@hy...> - 2011-01-13 08:57:21
|
On Thu, 13 Jan 2011 01:43:36 +0000 Rustam Aliyev <ru...@gm...> wrote: > Hello, Hi, > > I'm new to MooseFS and was wondering what will happen if Master server will > not have enough RAM to keep all metadata in memory? Machine will start to swap, and OOM killer will hit if ram+swap is not enough. I guess you understand the consequences ;) > Does it mean that whole FS will became unavailable if I hit maximum RAM? > Is there any way to limit RAM usage and load only recent files to memory? It's not possible (yet ?). AFAIK this is not on roadmap. Devs may accept such a patch. HTH, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Rustam A. <ru...@gm...> - 2011-01-13 01:43:43
|
Hello, I'm new to MooseFS and was wondering what will happen if Master server will not have enough RAM to keep all metadata in memory? Does it mean that whole FS will became unavailable if I hit maximum RAM? Is there any way to limit RAM usage and load only recent files to memory? In my case I have 15-20 million files and about 80% is not accessed at all or very rarely. Thanks for help, Rustam. |
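A back-of-envelope estimate only, and the per-object cost is an assumption rather than a documented figure: if the master needs on the order of 300 bytes of RAM per filesystem object, 20 million files come to roughly 6 GB of metadata.

  # rough master RAM needed for metadata, in MiB, assuming ~300 bytes per object
  echo $(( 20000000 * 300 / 1024 / 1024 ))    # prints 5722, i.e. about 5.6 GiB

The real figure depends on path lengths, goals and the MooseFS version, so it should be checked against a live master before sizing hardware.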
From: Jun C. P. <jun...@gm...> - 2011-01-12 21:07:39
|
Just a bit more clarification for those who can answer my questions:

> In particular, I am very interested in the consistency model
> comparison between them.

The main reason I need to know the detailed consistency model is that the consistency model typically implies associated requirements on applications. The GFS paper clearly describes what guarantees are provided by GFS in terms of concurrency, and therefore what is required of applications. In particular, for updates to files (chunks) with concurrent reads from multiple users, there must be a clear description of how they will be synchronized. Otherwise it is too risky to build applications that may run into unknown problems. Of course, if there is nothing to worry about in MFS, that would be more than welcome.

-Jun
From: Jun C. P. <jun...@gm...> - 2011-01-12 18:36:00
|
Hi,

While reading the original full paper on the Google File System (GFS) from SOSP'03, I got a strong impression that MFS inherited almost all the main design philosophies of GFS (chunk-based data pipeline, copy-on-write, lazy deletion, in-memory metadata management, data integrity via checksums, etc.). Even the terminology (metadata server, chunkserver, etc.) seems to be borrowed from GFS. However, I could not find any MooseFS document mentioning that MFS was motivated by GFS, so I don't know whether it was or not.

Anyway, even with so many similarities, there are some interesting differences between them. For instance, MFS is POSIX-compliant while GFS is not, and MFS uses random placement of replicas whereas GFS uses the concept of "closest chunkserver first."

However, it was not easy to find what other differences there are and what advantages MFS has over GFS in its current implementation. In particular, I am very interested in a comparison of their consistency models.

Is there anyone who can comment on these questions or point me to documents that describe the differences between them?

Thanks in advance,

-Jun
From: Piotr S. <pio...@gm...> - 2011-01-12 12:37:14
|
Hello,

Thanks for your answer. Yes, I created 2 GB of chunk space and it works now!!

On Wed, Jan 12, 2011 at 1:13 PM, Leonid Satanovsky <leo...@ar...> wrote:
> Greetings!
> The problem was recently discussed: you must have more than 1Gb of free space on a chunk server for the data to be written to it.
From: Leonid S. <leo...@ar...> - 2011-01-12 12:13:54
|
Greetings!

The problem was recently discussed: you must have more than 1 GB of free space on a chunkserver for data to be written to it (see the documentation!), and that is where the zero-length files you get come from. Errors from failed write operations may go unreported not because of MooseFS, but because the programs involved (shell redirection, the nano text editor and others) simply do not check for success or failure. Try the vim text editor to create a file on the filesystem; it will report the error to you.

So, create chunk storage with more than 1 GB of capacity.

And you should NOT touch the 'weird' directories; they are the backend structure of MooseFS.
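For anyone reproducing this kind of test setup without a spare disk, a minimal sketch of a loopback-backed chunk directory that clears the 1 GB reserved-space threshold could look like this (the paths, sizes and the mfs user/group name are assumptions, not taken from the thread):

  # create a 2 GB backing file, comfortably above the 1 GB reserve
  dd if=/dev/zero of=/data/mfsdisk.img bs=1M count=2048
  mkfs.ext3 -F /data/mfsdisk.img
  mkdir -p /mnt/mfschunk
  mount -o loop /data/mfsdisk.img /mnt/mfschunk
  chown mfs:mfs /mnt/mfschunk
  # list /mnt/mfschunk in the chunkserver's mfshdd.cfg and restart mfschunkserver

With only ~500 MB per chunkserver, as in the original report, every disk stays below the reserved space, so writes appear to succeed while leaving zero-length files.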
From: Michal B. <mic...@ge...> - 2011-01-10 08:27:19
|
Hi! Actually there are no such complicated calculations. For more details please check a thread here: http://sourceforge.net/mailarchive/message.php?msg_id=26743571 Kind regards Michal Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: seven [mailto:li...@gy...] Sent: Monday, January 10, 2011 2:21 AM To: moo...@li... Subject: [Moosefs-users] Another question about Mfs Hi,all! Now,I have another question about Mfs,and hope someone can help me. In my mind, when the master receive a request,he will distribute the requeset by some calculations, like load balance,Now I want to kown what can affect the calculation, CPU? Load? Mem? Disk space? Bandwidth? If the Disk space could affect it, and how much cent proportion it affects? Regards Seven -- Seven System Group E-mail:li...@gy... MSN: sev...@ho... ---------------------------------------------------------------------------- -- Gaining the trust of online customers is vital for the success of any company that requires sensitive data to be transmitted over the Web. Learn how to best implement a security strategy that keeps consumers' information secure and instills the confidence they need to proceed with transactions. http://p.sf.net/sfu/oracle-sfdevnl _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: seven <li...@gy...> - 2011-01-10 01:21:36
|
Hi all!

Now I have another question about MFS, and I hope someone can help me. In my mind, when the master receives a request it distributes the request according to some calculation, like load balancing. Now I want to know what can affect that calculation: CPU? Load? Memory? Disk space? Bandwidth? And if disk space affects it, what percentage of the weighting does it carry?

Regards
Seven

--
Seven
System Group
E-mail: li...@gy...
MSN: sev...@ho...
From: Piotr S. <pio...@gm...> - 2011-01-08 23:20:05
|
Hello everyone!

I'm quite new to MooseFS; I installed the whole system recently, including the master and chunk daemons, etc. The problem I've run into (and can't get past) is most probably an access/permission problem.

This is how the situation looks: I have 3 servers, 1 of them a VPS (which means it has no access to /dev/fuse or /dev/loop) and 2 of them real servers with WAN IPs. I decided to use the VPS as mfsmaster, and the rest would be chunkservers. I haven't got separate disks, so on the chunkservers I used /dev/loop to create a 500 MB ext3 filesystem and mounted it, as described in the manual.

# df -m
Filesystem      1M-blocks  Used  Available  Use%  Mounted on
/data/mfsdisk         462    11        428    3%  /mnt/mfschunk

Now, when I run the master server and the chunkservers, and on one of them I use the so-called 'client' (mfsmount), everything works:

# df -m
/data/mfsdisk         462    11        428    3%  /mnt/mfschunk
mfsmaster:9421        343     0        343    0%  /mnt/mfs

except that I can't write to /mnt/mfs. I managed to touch empty files as well as create empty directories, but no sooner do I echo anything into a file than it gives me an error: No space left on the device. Sometimes it does not say anything, for instance:

[root@chunk mfs]# touch file
[root@chunk mfs]# echo "test" >> file
[root@chunk mfs]# ls -la file
-rw-r--r-- 1 999 999 0 Jan 9 00:13 file

What I have noticed is that under /mnt/mfschunk (where the pseudo HDD is mounted) there are lots of weird directories; I'm not sure if I ought to touch them...

00 07 0E 15 1C 23 2A 31 38 3F 46 4D 54

I would very much appreciate any help in this matter.

Kind Regards,
Peter
From: Thomas S H. <tha...@gm...> - 2011-01-04 19:06:18
|
Wow, I am sorry, I forgot to get back to the list about this one.

OK, the devs have informed me that this has been fixed in the upstream code for the next release, but it was really my fault. I had set the chunkserver reconnect value too low in the mfschunkserver.cfg files; this caused the chunkservers to come back too quickly, which in turn caused the mfsmaster to take up 100% CPU spinning its wheels.

So don't set MASTER_TIMEOUT to anything below 10, and ten is cutting it close. I would stay above 30 unless your install is very small (fewer than 5 chunkservers).

On Fri, Dec 31, 2010 at 12:13 AM, Laurent Wandrebeck <lw...@hy...> wrote:
> On Thu, 30 Dec 2010 13:46:47 -0700 Thomas S Hatch <tha...@gm...> wrote:
> > We are getting these kernel errors:
> <snip>
> page allocation failure looks like some malloc failed.
> Check mfsmaster ram consumption. How much ram does your master box have? Are you in 32 or 64 bits mode?
> HTH,
> --
> Laurent Wandrebeck
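A sketch of the setting in question, based only on the advice above (the config path and the example value are assumptions; defaults differ between MooseFS versions):

  # check the current value; the file may live under /etc or /etc/mfs depending on the install prefix
  grep MASTER_TIMEOUT /etc/mfschunkserver.cfg
  # per the advice above, keep it well clear of the rapid-reconnect range, e.g.:
  # MASTER_TIMEOUT = 60

The chunkserver typically needs a restart (or reload) for a new value to take effect.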