From: Ioannis A. <ias...@fl...> - 2010-09-08 16:37:35
Hello,

I am testing out MooseFS for around 50 to 100 terabytes of data.

I successfully set up the whole environment; it was actually quite quick and easy. I was able to replicate with goal=3 and it worked very nicely.

At this point, there is only one requirement I have not been able to meet. I need 3 copies of each chunk, but my storage machines are distributed across two points of presence, and each point of presence must hold at least one copy. This works out when you have 3 chunk servers, but not with 6. The scenario is the following:

POP1: 4 chunk servers (need 2 replicas here)
POP2: 2 chunk servers (need 1 replica here)

I need this because if the whole of POP1 or the whole of POP2 goes down, I must still be able to access the contents. Writes are normally performed only in POP1, so POP2 normally sees only reads. The situation gets worse if I add 2 more chunk servers to POP1 and 1 more to POP2.

Is there a way to tell MooseFS that the 4 chunk servers of POP1 are one group that should hold at least 1 replica, and that the 2 chunk servers of POP2 are another group that should also hold at least 1 replica?

Is there any way to accomplish this?

Regards.

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A.

sys...@fl...

Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
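The selection rule being asked for can be sketched in a few lines. This is illustration only: MooseFS exposes no such grouping policy at the time of this thread, and the server names and POP labels below are invented.

```python
import itertools

# Hypothetical inventory: chunk servers labeled by point of presence.
# This only illustrates the requested rule: goal=3 replicas chosen so
# that every POP holds at least one copy of the chunk.
SERVERS = {
    "cs1": "POP1", "cs2": "POP1", "cs3": "POP1", "cs4": "POP1",
    "cs5": "POP2", "cs6": "POP2",
}

def pick_replicas(servers, goal=3):
    """Choose `goal` servers such that every POP gets at least one copy."""
    pops = set(servers.values())
    for combo in itertools.combinations(sorted(servers), goal):
        # Accept the first combination whose members cover all POPs.
        if {servers[s] for s in combo} == pops:
            return list(combo)
    raise ValueError("goal too small to cover every POP")

replicas = pick_replicas(SERVERS)
```

With 4 servers in POP1 and 2 in POP2 this always lands 2 copies in POP1 and 1 in POP2 (or vice versa), which is exactly the placement constraint described above.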
From: Kristofer P. <kri...@cy...> - 2010-09-08 17:45:04
According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future:

"Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack).

----- Original Message -----
From: "Ioannis Aslanidis" <ias...@fl...>
To: moo...@li...
Sent: Wednesday, September 8, 2010 11:37:08 AM
Subject: [Moosefs-users] Grouping chunk servers
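The roadmap entry describes a simple IP-address-to-location map. A hedged sketch of how such a mapping could drive read preference follows; the map format, function names, and subnets here are invented for illustration, not real MooseFS configuration.

```python
import ipaddress

# Hypothetical IP_address -> location_number map, in the spirit of the
# roadmap entry. Neither the format nor the names are real MooseFS config.
LOCATION_MAP = {
    "10.0.1.0/24": 1,   # rack / POP1
    "10.0.2.0/24": 2,   # rack / POP2
}

def location_of(ip):
    """Map an IP address to its location number, 0 if unknown."""
    addr = ipaddress.ip_address(ip)
    for net, loc in LOCATION_MAP.items():
        if addr in ipaddress.ip_network(net):
            return loc
    return 0

def prefer_local_copy(client_ip, copies):
    """Sort chunk copies so same-location copies come first for reads."""
    here = location_of(client_ip)
    # False sorts before True, so copies in the client's location lead.
    return sorted(copies, key=lambda ip: location_of(ip) != here)
```

Note that, as Alexander points out later in the thread, read preference alone does not solve the placement guarantee being requested; it only optimizes which existing copy is read.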
From: Ioannis A. <ias...@fl...> - 2010-09-08 18:17:36
Thank you for the answer. The only thing I need to know now is when to expect this, but at least I know it will be done some day.

On Wed, Sep 8, 2010 at 7:44 PM, Kristofer Pettijohn <kri...@cy...> wrote:
> According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future:
>
> "Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack).

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A.

sys...@fl...

Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
From: Michał B. <mic...@ge...> - 2010-09-13 09:59:11
Hi!

Yes, location awareness is on our roadmap. I cannot say exactly when it will be implemented, but probably quite soon :)

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Ioannis Aslanidis [mailto:ias...@fl...]
Sent: Wednesday, September 08, 2010 8:17 PM
To: Kristofer Pettijohn
Cc: moo...@li...
Subject: Re: [Moosefs-users] Grouping chunk servers
From: Alexander A. <akh...@ri...> - 2010-09-15 11:48:58
Hi all!

As far as I understand, "location awareness" is not exactly what Ioannis expects. In the scenario where the whole of POP1 goes down and the metadata server was located in POP1, we have data ambiguity: POP1 may merely have been cut off by a WAN network failure, and the real metadata server may still be alive. So in that case we cannot safely promote the metalogger in POP2 to the master role.

wbr
Alexander Akhobadze

======================================================
You wrote on 8 September 2010, 21:44:50:
======================================================

According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future:

"Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack).
From: Ioannis A. <ias...@fl...> - 2010-12-03 17:44:52
Hello,

Any updates on this feature? Do you think it'll be ready soon?

Best regards.

2010/9/15 Alexander Akhobadze <akh...@ri...>:
>
> Hi all!
>
> As far as I understand "Location awareness" is not exactly what Ioannis expects.
> In scenario when whole POP1 goes down and Metadata server was located in POP1
> we have data ambiguity because POP1 may be just disconnected by WAN network failure
> and real Metadata server may be alive. So, in this case in POP2 we can't
> promote Metalogger to Master role.
>
> wbr
> Alexander Akhobadze

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A.

E-Mail: iaslanidis at flumotion dot com
Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
From: Michał B. <mic...@ge...> - 2010-12-03 17:59:22
Unfortunately not yet. There will be some improvements to metaloggers in 1.6.18.

Kind regards
Michał Borychowski

-----Original Message-----
From: Ioannis Aslanidis [mailto:ias...@fl...]
Sent: Friday, December 03, 2010 6:44 PM
To: moo...@li...
Subject: Re: [Moosefs-users] Grouping chunk servers

Hello,

Any updates on this feature? Do you think it'll be ready soon?

Best regards.
From: jose m. <let...@us...> - 2010-12-03 19:14:43
On Fri, 03-12-2010 at 18:44 +0100, Ioannis Aslanidis wrote:
> Hello,
>
> Any updates on this feature? Do you think it'll be ready soon?
>
> Best regards.
>

* Until a "goal per rack" feature arrives (and that one is not foreseen; what is planned is "location awareness"), Pater Noster qui es in Polonia... you need N independent clusters, and rsync or similar is the only pseudo-solution: PYrsyncD ("Python Inotify Rsync Daemon") or another such tool, if you want it in pseudo-real-time triggered by inotify events.
From: Ioannis A. <ias...@fl...> - 2010-12-06 21:07:30
You can always use my software for that... pylsyncd: http://www.deathwing00.org/wordpress/?page_id=199

On Fri, Dec 3, 2010 at 8:14 PM, jose maria <let...@us...> wrote:
> * Until the feature "Goal per Rac, and it is not foreseen" not "Location
> awareness" comes, you need N independent cluster's, and rsync or similar
> is the only pseudo solution, PYrsyncD "Python Inotify Rsync Daemon" or
> other, if he wants it in pseudo(real-time) after inotify events.

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A.

E-Mail: iaslanidis at flumotion dot com
Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
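The cross-cluster mirroring idea behind pylsyncd (react to inotify events, then rsync the changed paths) can be sketched roughly in stdlib Python. Since the standard library has no inotify binding, this sketch compares mtimes on each pass instead of reacting to events; it is a stand-in for the approach, not pylsyncd's actual code.

```python
import os
import shutil

def sync_tree(src, dst):
    """One-way mirror of src into dst, copying new or changed files.

    A stdlib stand-in for the inotify+rsync approach: instead of
    reacting to filesystem events, compare mtimes on each pass and
    copy anything newer on the source side. Returns the copied paths.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            # copy2 preserves mtime, so unchanged files are skipped
            # on subsequent passes.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
                copied.append(d)
    return copied
```

Run in a loop (or from a real inotify handler), this gives the "N independent clusters plus rsync" pseudo-solution jose maria describes: each cluster mounts its own MooseFS, and the mirror keeps the second POP readable if the first one disappears.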