From: Steve W. <st...@pu...> - 2011-03-07 15:42:42
Hi,

I understand that there's some consideration being given to implementing a redundant master server for failover purposes. I've not seen this show up yet on the MooseFS roadmap, though. Are there any plans for building failover into MooseFS and, if so, is there a rough idea of the timeline involved (i.e., are we talking about six months or two years or...)?

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946
From: Michal B. <mic...@ge...> - 2011-03-08 09:59:45
Hi,

As you know, redundant master functionality is not currently built in. This subject is crucial for us: we know how important it is, and we receive lots of requests about it from many sources. Unfortunately, I cannot give a date for when it will be ready; six months is rather unlikely.

We've been using MooseFS for more than five years, and in practice it has not been that big a problem. For the moment we recommend solutions based on e.g. ucarp.

Please read this tutorial from Thomas:
http://sourceforge.net/mailarchive/message.php?msg_id=26804911

Kind regards,
Michal
From: Thomas S H. <tha...@gm...> - 2011-03-08 15:19:17
Keep in mind, Steve, that my solution as presented on the mailing list is not perfect, and I have found a number of bugs in it.

I am currently working on a second-generation system, which is much cleaner and more robust. Once I have tested the new system to the point where I am satisfied that it is a truly viable option, I will publish it on GitHub. Unfortunately, the hardware I have been using to test the failover system has been unavailable to me, and my new test hardware is still a month away.

But since this is obviously such a big deal to so many people, I will try to get the code I have up on GitHub as quickly as possible. I have written a small daemon in Python that fixes many of the problems in migration, and my tests so far have been flawless (which is why I need to run more tests; I can't imagine that it is really working as well as I think it is).

This is probably the only real shortcoming of MooseFS, but I still think MooseFS is the best option available for distributed filesystems today. While Ceph is a very promising project, our tests have concluded that it is still a ways out from being production ready. Also, if you have seen any of my previous posts on the matter, I am not a big fan of GlusterFS: we experienced rampant... RAMPANT data corruption, and the reliability was far below substandard.

I hope I can be helpful. If you (or anyone else) are interested in assisting with failover testing, I must admit that I would be more motivated to get what I have out. I will post as soon as I can get it all up on GitHub.

-Thomas S Hatch
-Systems Engineer, Beyond Oblivion
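Thomas's daemon was not published in this thread, but the general shape of such a watcher is: poll the master's client port, and promote the local standby only after several consecutive failures. A hypothetical sketch under those assumptions — the port is the MooseFS default, but the hostname, threshold, and promotion commands are illustrative guesses, not Thomas's actual code:

```python
import socket
import subprocess
import time

MASTER_HOST = "mfsmaster"  # assumed hostname of the active master
MASTER_PORT = 9421         # default port mfsmount clients connect to
FAIL_THRESHOLD = 3         # consecutive failures before promoting

def master_alive(host=MASTER_HOST, port=MASTER_PORT, timeout=2.0):
    """Return True if a TCP connection to the master succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def should_promote(consecutive_failures, threshold=FAIL_THRESHOLD):
    """Promote only after several consecutive failed checks, so one
    dropped packet does not trigger a spurious failover."""
    return consecutive_failures >= threshold

def promote():
    """Rebuild metadata from the metalogger's changelogs and start a
    local master (hypothetical recovery steps; adjust for your setup)."""
    subprocess.check_call(["mfsmetarestore", "-a"])
    subprocess.check_call(["mfsmaster", "start"])

def watch(interval=5.0):
    failures = 0
    while True:
        failures = 0 if master_alive() else failures + 1
        if should_promote(failures):
            promote()
            return
        time.sleep(interval)

# In production, watch() would run under supervision on the standby,
# typically alongside a ucarp upscript that moves the floating IP.
```

The consecutive-failure counter is the part that matters: a single timeout resets to zero on the next successful check, so only a sustained outage triggers the (expensive, disruptive) promotion.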
From: Steve W. <st...@pu...> - 2011-03-08 15:37:56
Thanks for your responses, Michal and Thomas. And I look forward to seeing the second generation of your failover solution, Thomas.

Regards,
Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946
From: Ricardo J. B. <ric...@da...> - 2011-03-09 18:20:42
On Tuesday, 08 March 2011, Thomas S Hatch wrote:
> I hope I can be helpful, and if you (or anyone else) are interested in
> assisting with failover testing I must admit that I would be more motivated
> to get what I have out

I don't know if I'll have time soon to test your new system, but please let us know if you publish it, even if it is unfinished. I might be able to give it a try at work (or at least a good look in non-work time!)

Cheers,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Thomas S H. <tha...@gm...> - 2011-03-09 18:26:11
Awesome. My current "rush" project (https://github.com/thatch45/salt) is almost ready for primetime, so I will be able to push these "mfs-failover" components soon. I will try to squeeze in enough time to make an Arch Linux package and a tarball as well. I will let you all know when the sources are up.

-Thomas S Hatch