From: Yuri <yur...@gm...> - 2011-03-12 02:58:45
|
Hi Thomas:

Sent again, cc'd to the list. Thanks very much.

Actually, I also wanted to split master and chunkserver into separate packages, like "emerge mfsmaster" and "emerge mfschunkserver"... But in Gentoo things are easier: we just use USE flags to control which programs get compiled. For example, if we only need the master server, we can run USE="-mfschunkserver -mfsmetalogger" emerge mfs and get the same result. In my opinion, USE flags make the package easier to maintain and keep the versions consistent.

By the way, I use start-stop-daemon to run mfscgiserv, but the recorded PID is wrong, so I haven't finished that init script yet.

Yuri, 12-Mar-2011

From: Thomas S Hatch [mailto:tha...@gm...]
Sent: 12 March 2011, 0:28
To: Yuri
Cc: moo...@li...
Subject: Re: [Moosefs-users] MFS for Gentoo Linux

I will take a look at your ebuild. As a reference, the packages usually get named mfs-master, mfs-chunkserver and mfs-client, but that is of course your decision. If you are interested, I maintain the Arch Linux packages, and those are fairly similar to ebuilds; how they are set up may help as a reference: http://projects.archlinux.org/svntogit/community.git/tree/mfs/trunk/

I will get back to you on your ebuild.

-Thomas S Hatch

On Fri, Mar 11, 2011 at 9:18 AM, Yuri <yur...@gm...> wrote:

Hi all: I have created a MooseFS ebuild for Gentoo Linux, and I hope someone can help me test it. This ebuild defines 4 USE flags: mfscgiserv, mfschunkserver, mfsmaster and mfsmount. I will maintain this ebuild going forward; I hope everyone can give me feedback and suggestions. You can check the ebuild out with svn: svn co svn://svn.yurifamily.cn/overlay/portage

Yuri, Mar-11-2011
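[Editor's note] For readers who want to try the overlay, a minimal sketch of how the USE-flag selection Yuri describes could be made permanent in Portage configuration. The package category/atom is a guess, and the flag names simply follow his post:

```
# /etc/portage/package.use -- hypothetical atom; build only the master-side tools
sys-cluster/mfs mfsmaster mfscgiserv -mfschunkserver -mfsmount

# one-off alternative, as in Yuri's own example:
USE="-mfschunkserver -mfsmetalogger" emerge mfs
```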
From: Thomas S H. <tha...@gm...> - 2011-03-11 16:28:09
|
I will take a look at your ebuild. As a reference, the packages usually get named mfs-master, mfs-chunkserver and mfs-client, but that is of course your decision.

If you are interested, I maintain the Arch Linux packages, and those are fairly similar to ebuilds; how they are set up may help as a reference: http://projects.archlinux.org/svntogit/community.git/tree/mfs/trunk/

I will get back to you on your ebuild.

-Thomas S Hatch

On Fri, Mar 11, 2011 at 9:18 AM, Yuri <yur...@gm...> wrote:
> Hi all:
>
> I have created a MooseFS ebuild for Gentoo Linux, and I hope someone can help me to test it.
>
> This ebuild defines 4 USE flags, including mfscgiserv, mfschunkserver, mfsmaster and mfsmount. I will maintain this ebuild afterwards; I hope everyone can give me feedback and suggestions.
>
> You can use svn to check out this ebuild file: svn co svn://svn.yurifamily.cn/overlay/portage
>
> Yuri, Mar-11-2011
From: Yuri <yur...@gm...> - 2011-03-11 16:18:49
|
Hi all:

I have created a MooseFS ebuild for Gentoo Linux, and I hope someone can help me to test it.

This ebuild defines 4 USE flags, including mfscgiserv, mfschunkserver, mfsmaster and mfsmount. I will maintain this ebuild afterwards; I hope everyone can give me feedback and suggestions.

You can use svn to check out this ebuild file: svn co svn://svn.yurifamily.cn/overlay/portage

Yuri, Mar-11-2011
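[Editor's note] For context, a minimal sketch of the configure phase such an ebuild might use to honour those USE flags. The configure switch names are assumptions based on the --disable-mfs* options mentioned elsewhere in this thread; EAPI boilerplate, category and dependencies are omitted:

```
# mfs-1.6.x.ebuild (sketch) -- map USE flags onto the package's configure switches
src_configure() {
    econf \
        $(use_enable mfsmaster) \
        $(use_enable mfschunkserver) \
        $(use_enable mfscgiserv) \
        $(use_enable mfsmount)
}
```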
From: Stéphane B. <ste...@ga...> - 2011-03-10 19:47:49
|
Hi,

We have had MooseFS running for 2 months now, and today we got a few errors on it. The master ran its structure check loop, and it ended with 72 unavailable chunks and files. I don't know what this really means. The files are still accessible, and the mfscheckfile output looks good too.

We currently have about 2 million files on the MooseFS and 2 chunkservers in a redundant setup. This cluster runs version 1.6.17; we plan to upgrade it soon.

Thanks,
Stephane
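[Editor's note] When chasing reports like this, the per-file client tools are a quick way to confirm how many valid copies of each chunk actually exist. A minimal sketch, assuming the filesystem is mounted at /mnt/mfs (the path is only an example):

```
# number of valid copies per chunk of a file (mfs-client tools)
mfscheckfile /mnt/mfs/path/to/file

# chunk-by-chunk detail: which chunkservers hold each copy
mfsfileinfo /mnt/mfs/path/to/file
```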
From: Thomas S H. <tha...@gm...> - 2011-03-10 04:44:38
|
I have simplified the ucarp setup, but added a daemon that manages the metalogger, because after a failover the metadata on the loggers was corrupt until there was a fresh pull from the master.

I don't have documentation yet; as you can see from my GitHub, I have been very busy lately. But so far in my tests this setup has been flawless, even with rapid cascading failovers. Downtime between failovers depends on the speed of the drives storing the metadata; I have some ideas to speed this up, but life happens.

I know that failover is in very high demand for MooseFS, so if anyone wants to help with this, patches and GitHub pull requests are VERY welcome. The code is here: https://github.com/thatch45/mfs-failover

Let me know what you all think!

P.S. I have a few other projects that might be helpful for people; if you are interested, take a look at salt, clay and varch. We use them extensively at Beyond Oblivion.
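[Editor's note] For readers who have not used ucarp before, a minimal sketch of the kind of invocation such a master-failover setup relies on. The interface, addresses, password and script paths are assumptions, and the up/down scripts (which would start or stop mfsmaster and take over the virtual IP) are not shown:

```
# float a virtual master IP between candidate master hosts (run on each of them)
ucarp --interface=eth0 --srcip=192.168.0.11 --vhid=42 --pass=changeme \
      --addr=192.168.0.10 \
      --upscript=/usr/local/sbin/mfsmaster-up.sh \
      --downscript=/usr/local/sbin/mfsmaster-down.sh &
```

Clients, chunkservers and metaloggers would then be pointed at the shared address (192.168.0.10 in this sketch) rather than at any single physical master.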
From: Robert D. <ro...@in...> - 2011-03-10 02:20:27
|
Hello, I have a few servers which are running mooseFS great, however, there is one that is continually having issues when running mfschunkserver. The process itself will run for approximately three hours, then drop connection to the master. When it drops connection the master, I can stop the process, start the process again, but when I restart the mfschunkserver process the system becomes unstable and hard-locks. The output I receive is the following from messages: (the last few lines are irrelevant, it just shows I was able to plug a usb keyboard in the system, tried restarting the mfschunkserver, then had to reboot) Mar 9 17:54:50 brickc kernel: [11017.971891] Pid: 1549, comm: mfschunkserver Tainted: G D 2.6.35-22-server #33-Ubuntu DH55HC/ Mar 9 17:54:50 brickc kernel: [11017.973678] RIP: 0010:[<ffffffff8110d241>] [<ffffffff8110d241>] page_evictable+0x21/0x80 Mar 9 17:54:50 brickc kernel: [11017.975470] RSP: 0018:ffff880378931438 EFLAGS: 00010286 Mar 9 17:54:50 brickc kernel: [11017.977256] RAX: fff688000668a848 RBX: ffffea0009ea1b88 RCX: ffffffffffffffd0 Mar 9 17:54:50 brickc kernel: [11017.979067] RDX: 020000000000080d RSI: 0000000000000000 RDI: ffffea0009ea1b88 Mar 9 17:54:50 brickc kernel: [11017.980849] RBP: ffff880378931438 R08: dead000000200200 R09: dead000000100100 Mar 9 17:54:50 brickc kernel: [11017.982637] R10: ffff8801000013d0 R11: 0000000000000000 R12: ffffea0009ea1bb0 Mar 9 17:54:50 brickc kernel: [11017.984417] R13: ffff880378931878 R14: ffff8803789316a8 R15: ffff8801000012d8 Mar 9 17:54:50 brickc kernel: [11017.986230] FS: 00007f7178d99700(0000) GS:ffff880001e20000(0000) knlGS:0000000000000000 Mar 9 17:54:50 brickc kernel: [11017.988035] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b Mar 9 17:54:50 brickc kernel: [11017.989887] CR2: 00007f7178123000 CR3: 0000000419f6d000 CR4: 00000000000006e0 Mar 9 17:54:50 brickc kernel: [11017.991715] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 9 17:54:50 brickc kernel: [11017.993577] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Mar 9 17:54:50 brickc kernel: [11017.995448] Process mfschunkserver (pid: 1549, threadinfo ffff880378930000, task ffff8804169f16e0) Mar 9 17:54:50 brickc kernel: [11017.999197] ffff880378931548 ffffffff8110ed70 0000000000000000 ffff8803789314f8 Mar 9 17:54:50 brickc kernel: [11017.999229] <0> 0000000000000000 ffff88000668a848 0000000000000000 0000000000000000 Mar 9 17:54:50 brickc kernel: [11018.001143] <0> 0000000000000000 0000000000000001 ffff8803789314a8 ffffffff81149b01 Mar 9 17:54:50 brickc kernel: [11018.006906] [<ffffffff8110ed70>] shrink_page_list+0x100/0x580 Mar 9 17:54:50 brickc kernel: [11018.008839] [<ffffffff81149b01>] ? mem_cgroup_del_lru_list+0x21/0xa0 Mar 9 17:54:50 brickc kernel: [11018.010790] [<ffffffff81149c09>] ? mem_cgroup_del_lru+0x39/0x40 Mar 9 17:54:50 brickc kernel: [11018.012724] [<ffffffff8110d98b>] ? isolate_lru_pages+0xdb/0x260 Mar 9 17:54:50 brickc kernel: [11018.014637] [<ffffffff8110f4bd>] shrink_inactive_list+0x2cd/0x7f0 Mar 9 17:54:50 brickc kernel: [11018.016550] [<ffffffff81107337>] ? __alloc_pages_slowpath+0x1a7/0x590 Mar 9 17:54:50 brickc kernel: [11018.018462] [<ffffffff8110d772>] ? 
get_scan_count+0x172/0x2b0 Mar 9 17:54:50 brickc kernel: [11018.020328] [<ffffffff8110fb8b>] shrink_zone+0x1ab/0x230 Mar 9 17:54:50 brickc kernel: [11018.022159] [<ffffffff8110fc93>] shrink_zones+0x83/0x130 Mar 9 17:54:50 brickc kernel: [11018.023986] [<ffffffff8110fdde>] do_try_to_free_pages+0x9e/0x360 Mar 9 17:54:50 brickc kernel: [11018.025811] [<ffffffff8111024b>] try_to_free_pages+0x6b/0x70 Mar 9 17:54:50 brickc kernel: [11018.027612] [<ffffffff8110740a>] __alloc_pages_slowpath+0x27a/0x590 Mar 9 17:54:50 brickc kernel: [11018.029414] [<ffffffff8122b77a>] ? __jbd2_log_space_left+0x1a/0x40 Mar 9 17:54:50 brickc kernel: [11018.031220] [<ffffffff81107884>] __alloc_pages_nodemask+0x164/0x1d0 Mar 9 17:54:50 brickc kernel: [11018.033020] [<ffffffff811397ba>] alloc_pages_current+0x9a/0x100 Mar 9 17:54:50 brickc kernel: [11018.034801] [<ffffffff81100da7>] __page_cache_alloc+0x87/0x90 Mar 9 17:54:50 brickc kernel: [11018.036600] [<ffffffff8110215c>] grab_cache_page_write_begin+0x7c/0xc0 Mar 9 17:54:50 brickc kernel: [11018.038474] [<ffffffff811f1964>] ext4_da_write_begin+0x144/0x290 Mar 9 17:54:50 brickc kernel: [11018.040825] [<ffffffff811f21ed>] ? ext4_da_write_end+0xfd/0x2e0 Mar 9 17:54:50 brickc kernel: [11018.043165] [<ffffffff8104ee12>] ? enqueue_entity+0x132/0x1b0 Mar 9 17:54:50 brickc kernel: [11018.045448] [<ffffffff810ffb36>] ? iov_iter_copy_from_user_atomic+0x96/0x170 Mar 9 17:54:50 brickc kernel: [11018.047769] [<ffffffff810ffe62>] generic_perform_write+0xc2/0x1d0 Mar 9 17:54:50 brickc kernel: [11018.049582] [<ffffffff810fffd4>] generic_file_buffered_write+0x64/0xa0 Mar 9 17:54:50 brickc kernel: [11018.051321] [<ffffffff811028e0>] __generic_file_aio_write+0x240/0x470 Mar 9 17:54:50 brickc kernel: [11018.053052] [<ffffffff810901ed>] ? futex_wait_queue_me+0xcd/0x110 Mar 9 17:54:50 brickc kernel: [11018.054751] [<ffffffff81102b75>] generic_file_aio_write+0x65/0xd0 Mar 9 17:54:50 brickc kernel: [11018.056442] [<ffffffff811e77a9>] ext4_file_write+0x39/0xb0 Mar 9 17:54:50 brickc kernel: [11018.058204] [<ffffffff81152bea>] do_sync_write+0xda/0x120 Mar 9 17:54:50 brickc kernel: [11018.059901] [<ffffffff8159e76e>] ? _raw_spin_lock+0xe/0x20 Mar 9 17:54:50 brickc kernel: [11018.061568] [<ffffffff81090a62>] ? futex_wake+0x112/0x130 Mar 9 17:54:50 brickc kernel: [11018.063232] [<ffffffff8128f208>] ? apparmor_file_permission+0x18/0x20 Mar 9 17:54:50 brickc kernel: [11018.064900] [<ffffffff8125e7a6>] ? 
security_file_permission+0x16/0x20 Mar 9 17:54:50 brickc kernel: [11018.066565] [<ffffffff81152ec8>] vfs_write+0xb8/0x1a0 Mar 9 17:54:50 brickc kernel: [11018.068234] [<ffffffff81153862>] sys_pwrite64+0x82/0xa0 Mar 9 17:54:50 brickc kernel: [11018.069913] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b Mar 9 17:54:50 brickc kernel: [11018.077405] RSP <ffff880378931438> Mar 9 17:54:50 brickc kernel: [11018.079482] ---[ end trace 5c000d67753ebd63 ]--- Mar 9 17:59:04 brickc kernel: [11271.114370] usb 2-1.3: new low speed USB device using ehci_hcd and address 4 Mar 9 17:59:04 brickc kernel: [11271.253770] input: USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/input/input5 Mar 9 17:59:04 brickc kernel: [11271.259128] generic-usb 0003:04D9:1603.0003: input,hidraw0: USB HID v1.10 Keyboard [ USB Keyboard] on usb-0000:00:1d.0-1.3/input0 Mar 9 17:59:04 brickc kernel: [11271.279819] input: USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/input/input6 Mar 9 17:59:04 brickc kernel: [11271.284307] generic-usb 0003:04D9:1603.0004: input,hidraw1: USB HID v1.10 Device [ USB Keyboard] on usb-0000:00:1d.0-1.3/input1 Mar 9 18:06:20 brickc kernel: imklog 4.2.0, log source = /proc/kmsg started. Ideas for debugging this issue? |
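[Editor's note] The oops above is inside the kernel's page-reclaim path (shrink_page_list) while mfschunkserver writes to ext4, which points at memory pressure or a kernel bug on that box rather than at MooseFS itself, so trying a newer kernel is the obvious first step. As a sketch of data worth collecting during the three-hour window before the crash (the commands are standard tools; the sysctl value is only illustrative and is an assumption, not a known fix):

```
# watch memory pressure and reclaim activity while the chunkserver runs
vmstat 1 | tee /var/tmp/vmstat.log &

# snapshot memory and slab usage periodically
cat /proc/meminfo > /var/tmp/meminfo.$(date +%s)
slabtop -o -s c | head -n 30 > /var/tmp/slabtop.$(date +%s)

# give the allocator more headroom for reclaim-heavy allocations (illustrative value)
sysctl -w vm.min_free_kbytes=65536
```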
From: Thomas S H. <tha...@gm...> - 2011-03-09 18:26:11
|
Awesome. My current "rush" project (https://github.com/thatch45/salt) is almost ready for primetime, so I will be able to push these "mfs-failover" components soon. I will try to squeeze in enough time to make an Arch Linux package and a tarball as well. I will let you all know when the sources are up.

-Thomas S Hatch

On Wed, Mar 9, 2011 at 11:20 AM, Ricardo J. Barberis <ric...@da...> wrote:
> On Tuesday, 08 March 2011, Thomas S Hatch wrote:
> > Keep in mind Steve that my solution, as presented in the mailing list, is
> > not perfect, and I have found a number of bugs in it.
> >
> > I am currently working on a second-generation system, which is much cleaner
> > and more robust.
> [ ... ]
> > I hope I can be helpful, and if you (or anyone else) are interested in
> > assisting with failover testing I must admit that I would be more motivated
> > to get what I have out, I will post as soon as I can get it all up to
> > github.
>
> I don't know if I'll have time soon to test your new system, but please let us
> know if you publish it even if it is unfinished, I might be able to give it a
> try at work (or at least a good look in non-work time!)
>
> Cheers,
> --
> Ricardo J. Barberis
> Senior SysAdmin / ITI
> Dattatec.com :: Soluciones de Web Hosting
> Tu Hosting hecho Simple!
From: Ricardo J. B. <ric...@da...> - 2011-03-09 18:20:42
|
On Tuesday, 08 March 2011, Thomas S Hatch wrote:
> Keep in mind Steve that my solution, as presented in the mailing list, is
> not perfect, and I have found a number of bugs in it.
>
> I am currently working on a second-generation system, which is much cleaner
> and more robust.
[ ... ]
> I hope I can be helpful, and if you (or anyone else) are interested in
> assisting with failover testing I must admit that I would be more motivated
> to get what I have out, I will post as soon as I can get it all up to
> github.

I don't know if I'll have time soon to test your new system, but please let us know if you publish it even if it is unfinished. I might be able to give it a try at work (or at least a good look in non-work time!)

Cheers,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!

------------------------------------------
From: Steve W. <st...@pu...> - 2011-03-08 15:37:56
|
Thanks for your responses, Michal and Thomas. And I look forward to seeing the second generation of your failover solution, Thomas. Regards, Steve On 03/08/2011 10:19 AM, Thomas S Hatch wrote: > Keep in mind Steve that my solution, as presented in the mailing list > is not perfect, and I have found a number of bugs in it. > > I am currently working in a second generation system, which is much > cleaner and more robust. > > Once I have the new system tested to a level in which I am satisfied > that it is a truly viable option I will be publishing it on github, > unfortunately the hardware I have been using for testing the failover > system has been unavailable to me for testing, and my new test > hardware is still a month away. > > But, since this is obviously such a big deal to so many people, I will > try to get the code that I have up on github as quickly as possible. I > have written a small daemon in python that fixes many of the problems > in migration, and my tests so far have been flawless(which is why I > need to run more tests, I can't imagine that it is really working as > well as I think it is). > > This is probably the only real shortcoming of MooseFS, but I still > think that MooseFS is the best option available for distributed > filesystems today, and while Ceph is a very promising project, our > tests have concluded that it is still a ways out from being production > ready. Also, if you have seen any of my previous posts on the matter, > I am not a big fan of GlusterFS, we experienced rampant... RAMPANT > data corruption, and the reliability was far below sub standard. > > I hope I can be helpful, and if you (or anyone else) are interested in > assisting with failover testing I must admit that I would be more > motivated to get what I have out, I will post as soon as I can get it > all up to github. > > -Thomas S Hatch > -Systems Engineer, Beyond Oblivion > > On Tue, Mar 8, 2011 at 2:59 AM, Michal Borychowski > <mic...@ge... <mailto:mic...@ge...>> > wrote: > > Hi > > As you know a redundant master functionality is right now not a > built-in > functionality. This subject is for us very crucial because we know how > important this is and we receive lots of requests about it from many > sources. Unfortunately I cannot give a date when it would be > ready. 6 months > is not rather possible. > > We've been using MooseFS for more than 5 years and it appears that > in real > it is not that big problem. For the moment we recommend using > solutions with > eg. ucarp. > > Please read this tutorial from Thomas: > http://sourceforge.net/mailarchive/message.php?msg_id=26804911 > > > Kind regards > Michal > > > > -----Original Message----- > From: Steve Wilson [mailto:st...@pu... > <mailto:st...@pu...>] > Sent: Monday, March 07, 2011 4:43 PM > To: moo...@li... > <mailto:moo...@li...> > Subject: [Moosefs-users] Question re: master server failover > > Hi, > > I understand that there's some consideration being given to > implementing > a redundant master server for failover purposes. I've not seen this > show up yet on the MooseFS roadmap, though. Are there any plans for > building failover into MooseFS and, if so, is there a rough idea > of the > timeline involved (i.e., are we talking about six months or two years > or...)? > > Thanks! > > Steve > > -- > Steven M. 
Wilson, Systems and Network Manager > Markey Center for Structural Biology > Purdue University > (765) 496-1946 > > > ---------------------------------------------------------------------------- > -- > What You Don't Know About Data Connectivity CAN Hurt You > This paper provides an overview of data connectivity, details > its effect on application quality, and explores various alternative > solutions. http://p.sf.net/sfu/progress-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... > <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > ------------------------------------------------------------------------------ > What You Don't Know About Data Connectivity CAN Hurt You > This paper provides an overview of data connectivity, details > its effect on application quality, and explores various alternative > solutions. http://p.sf.net/sfu/progress-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... > <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > -- Steven M. Wilson, Systems and Network Manager Markey Center for Structural Biology Purdue University (765) 496-1946 |
From: Thomas S H. <tha...@gm...> - 2011-03-08 15:19:17
|
Keep in mind Steve that my solution, as presented in the mailing list is not perfect, and I have found a number of bugs in it. I am currently working in a second generation system, which is much cleaner and more robust. Once I have the new system tested to a level in which I am satisfied that it is a truly viable option I will be publishing it on github, unfortunately the hardware I have been using for testing the failover system has been unavailable to me for testing, and my new test hardware is still a month away. But, since this is obviously such a big deal to so many people, I will try to get the code that I have up on github as quickly as possible. I have written a small daemon in python that fixes many of the problems in migration, and my tests so far have been flawless(which is why I need to run more tests, I can't imagine that it is really working as well as I think it is). This is probably the only real shortcoming of MooseFS, but I still think that MooseFS is the best option available for distributed filesystems today, and while Ceph is a very promising project, our tests have concluded that it is still a ways out from being production ready. Also, if you have seen any of my previous posts on the matter, I am not a big fan of GlusterFS, we experienced rampant... RAMPANT data corruption, and the reliability was far below sub standard. I hope I can be helpful, and if you (or anyone else) are interested in assisting with failover testing I must admit that I would be more motivated to get what I have out, I will post as soon as I can get it all up to github. -Thomas S Hatch -Systems Engineer, Beyond Oblivion On Tue, Mar 8, 2011 at 2:59 AM, Michal Borychowski < mic...@ge...> wrote: > Hi > > As you know a redundant master functionality is right now not a built-in > functionality. This subject is for us very crucial because we know how > important this is and we receive lots of requests about it from many > sources. Unfortunately I cannot give a date when it would be ready. 6 > months > is not rather possible. > > We've been using MooseFS for more than 5 years and it appears that in real > it is not that big problem. For the moment we recommend using solutions > with > eg. ucarp. > > Please read this tutorial from Thomas: > http://sourceforge.net/mailarchive/message.php?msg_id=26804911 > > > Kind regards > Michal > > > > -----Original Message----- > From: Steve Wilson [mailto:st...@pu...] > Sent: Monday, March 07, 2011 4:43 PM > To: moo...@li... > Subject: [Moosefs-users] Question re: master server failover > > Hi, > > I understand that there's some consideration being given to implementing > a redundant master server for failover purposes. I've not seen this > show up yet on the MooseFS roadmap, though. Are there any plans for > building failover into MooseFS and, if so, is there a rough idea of the > timeline involved (i.e., are we talking about six months or two years > or...)? > > Thanks! > > Steve > > -- > Steven M. Wilson, Systems and Network Manager > Markey Center for Structural Biology > Purdue University > (765) 496-1946 > > > > ---------------------------------------------------------------------------- > -- > What You Don't Know About Data Connectivity CAN Hurt You > This paper provides an overview of data connectivity, details > its effect on application quality, and explores various alternative > solutions. http://p.sf.net/sfu/progress-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... 
> https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > ------------------------------------------------------------------------------ > What You Don't Know About Data Connectivity CAN Hurt You > This paper provides an overview of data connectivity, details > its effect on application quality, and explores various alternative > solutions. http://p.sf.net/sfu/progress-d2d > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Thomas S H. <tha...@gm...> - 2011-03-08 15:03:41
|
Thanks Anh, I should have specified that we had already made those changes to the configs But I had no idea that you could just copy the chunks over, but now that I think about the architecture of MooseFS, that makes perfect sense! -Thanks 2011/3/8 Michal Borychowski <mic...@ge...> > Hi! > > > > Absolutely the fastest way would be (for each chunkserver): > > 1. Stop the chunkserver X > > 2. Copy (just using ‘cp’) all files from the old disks to the new > ones > > 3. Run the chunkserver X using the new disks (of course the > chunkserver may now run on a new mainboard, etc.) > > > > The standard method would be connecting new CSs along with the old ones > (perfectly all in the same moment) and marking all the old disks for > removal. Then you have to wait until system migrates chunks to the new > servers. But it would take several days. > > > > > > Kind regards > > Michal > > > > > > *From:* Thomas S Hatch [mailto:tha...@gm...] > *Sent:* Friday, March 04, 2011 6:43 PM > *To:* moosefs-users > *Subject:* [Moosefs-users] speed up balancing > > > > I am currently migrating my moosefs environment into a new setup, new OS, > new partitioning, changing out disks for larger disks etc. > > > > So I need to be able to take a chunkserver down, delete the chunks on it, > and then have it rebalance as quickly as possible back into the overall > moosefs cluster, what is the fastest way to do this? Should I make the > chunks for removal before taking them offline? Will that speed up the > balancing? > > > > -Thomas S Hatch > |
From: Michal B. <mic...@ge...> - 2011-03-08 10:56:01
|
Hi Robert! We'll give it a closer look, for the moment you can try to add in Makefile compilator option: "-D_POSIX_PTHREAD_SEMANTICS" It helps in Solaris 10, and should help in Nexenta too. Kind regards Michal Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Robert Dye [mailto:ro...@in...] Sent: Friday, March 04, 2011 7:58 PM To: moo...@li... Subject: [Moosefs-users] Compile on Nexenta Fails Hello, When compiling moosefs on NexentaStor 3.0.4(Community), I use the following configure options: ./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount After configuring, make returns the following output: Making all in mfschunkserver make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver' gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c hddspacemgr.c: In function 'hdd_chunk_remove': hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_chunk_get': hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_check_folders': hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_emptycrc': hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness hddspacemgr.c: In function 'chunk_readcrc': hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_freecrc': hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_delayed_ops': hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_io_begin': hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness hddspacemgr.c: 
In function 'hdd_term': hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c ../mfscommon/main.c: In function 'changeugid': ../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r' ../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r' ../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r' make[2]: *** [mfschunkserver-main.o] Error 1 make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/mfs-1.6.20-2' make: *** [all] Error 2 Ideas/Suggestions/Work-Around? |
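[Editor's note] A sketch of how Michal's suggestion can be applied without editing the generated Makefiles, by passing the define at configure time. The --disable flags are copied from Robert's original configure line; on Solaris-derived systems, _POSIX_PTHREAD_SEMANTICS selects the POSIX (five-argument) variants of getgrnam_r/getpwnam_r/getpwuid_r, which is what mfscommon/main.c expects:

```
# chunkserver-only build on Nexenta/Solaris, with POSIX getpw*/getgr* semantics
./configure CPPFLAGS="-D_POSIX_PTHREAD_SEMANTICS" \
    --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount
make
```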
From: Michal B. <mic...@ge...> - 2011-03-08 10:35:39
|
Hi

It can happen when the system receives lots of connections which are quickly closed. We are working on "persistent connections" (dig through the archive of the group), which should help.

Kind regards
Michal

-----Original Message-----
From: Giovanni Toraldo [mailto:gt...@li...]
Sent: Tuesday, March 08, 2011 9:43 AM
To: moo...@li...
Subject: [Moosefs-users] TCP: time wait bucket table overflow (CT0)

Hi, I'm getting a lot of these errors in dmesg on both chunkservers:

TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)

Googling around, it seems to be an exhaustion of available sockets, and a good workaround seems to be enabling tcp_tw_recycle:

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

The number of TIME_WAIT connections dropped from 4859 sockets to near 0 within the next few minutes. Has anyone else hit this issue with the latest MooseFS under Debian Squeeze?

Thanks.
--
Giovanni Toraldo
http://www.libersoft.it/
From: Michal B. <mic...@ge...> - 2011-03-08 10:08:12
|
Hi!

Absolutely the fastest way would be (for each chunkserver):

1. Stop chunkserver X.
2. Copy (just using 'cp') all files from the old disks to the new ones.
3. Run chunkserver X using the new disks (of course, the chunkserver may now run on a new mainboard, etc.).

The standard method would be to connect the new chunkservers alongside the old ones (ideally all at the same moment) and mark all the old disks for removal. Then you have to wait until the system migrates the chunks to the new servers, but that would take several days.

Kind regards
Michal

From: Thomas S Hatch [mailto:tha...@gm...]
Sent: Friday, March 04, 2011 6:43 PM
To: moosefs-users
Subject: [Moosefs-users] speed up balancing

I am currently migrating my moosefs environment into a new setup: new OS, new partitioning, changing out disks for larger disks, etc.

So I need to be able to take a chunkserver down, delete the chunks on it, and then have it rebalance back into the overall moosefs cluster as quickly as possible. What is the fastest way to do this? Should I mark the chunks for removal before taking them offline? Will that speed up the balancing?

-Thomas S Hatch
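[Editor's note] A concrete sketch of the per-chunkserver copy Michal describes. The mount points are examples, the mfshdd.cfg location depends on how MooseFS was installed, and cp -a is used so ownership, permissions and timestamps of the chunk files survive the move:

```
mfschunkserver stop                       # 1. stop this chunkserver
cp -a /mnt/old-disk1/. /mnt/new-disk1/    # 2. copy every chunk file to the new disks
cp -a /mnt/old-disk2/. /mnt/new-disk2/
$EDITOR /etc/mfshdd.cfg                   #    point the entries at the new mount points
mfschunkserver start                      # 3. bring it back up on the new disks
```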
From: Michal B. <mic...@ge...> - 2011-03-08 09:59:45
|
Hi

As you know, a redundant master is not currently a built-in functionality. This subject is crucial for us, because we know how important it is and we receive lots of requests about it from many sources. Unfortunately, I cannot give a date for when it will be ready; 6 months is rather not possible.

We've been using MooseFS for more than 5 years, and in practice it turns out not to be that big a problem. For the moment we recommend using solutions with e.g. ucarp.

Please read this tutorial from Thomas:
http://sourceforge.net/mailarchive/message.php?msg_id=26804911

Kind regards
Michal

-----Original Message-----
From: Steve Wilson [mailto:st...@pu...]
Sent: Monday, March 07, 2011 4:43 PM
To: moo...@li...
Subject: [Moosefs-users] Question re: master server failover

Hi,

I understand that there's some consideration being given to implementing a redundant master server for failover purposes. I've not seen this show up yet on the MooseFS roadmap, though. Are there any plans for building failover into MooseFS and, if so, is there a rough idea of the timeline involved (i.e., are we talking about six months or two years or...)?

Thanks!

Steve

--
Steven M. Wilson, Systems and Network Manager
Markey Center for Structural Biology
Purdue University
(765) 496-1946
From: Anh K. H. <ky...@vi...> - 2011-03-08 09:33:49
|
On Tue, 8 Mar 2011 16:21:43 +0700 "Anh K. Huynh" <ky...@vi...> wrote:

> ... If number of chunks is greater than
> your maximum goal, it's safe to shut down the chunk server that you
> want to replace (IMHO).

My bad. Even if the number of chunkservers is greater than the maximum goal, we have to ensure that there isn't any file that is under its goal before shutting down any chunkserver. This note comes from the FAQ, AFAIK.

--
Anh Ky Huynh @ UTC+7
Registered Linux User #392115
From: Anh K. H. <ky...@vi...> - 2011-03-08 09:22:06
|
On Fri, 4 Mar 2011 10:42:56 -0700 Thomas S Hatch <tha...@gm...> wrote:

> I am currently migrating my moosefs environment into a new setup,
> new OS, new partitioning, changing out disks for larger disks etc.
>
> So I need to be able to take a chunkserver down, delete the chunks
> on it, and then have it rebalance as quickly as possible back into
> the overall moosefs cluster, what is the fastest way to do this?
> Should I make the chunks for removal before taking them offline?
> Will that speed up the balancing?

I think the safest way is to mark all disks on the chunkserver as *for removal*, and increase the values of "CHUNKS_WRITE_REP_LIMIT" and "CHUNKS_READ_REP_LIMIT" in the file "mfsmaster.cfg" (the master and the chunkserver need to be restarted; the numbers depend on your network and filesystem performance).

My practice: I have two chunkservers and goal 2 for all files. I replaced a chunkserver by simply shutting it down. It's unsafe, but it works for me as my files are local :) If the number of chunkservers is greater than your maximum goal, it's safe to shut down the chunkserver that you want to replace (IMHO).

Regards,
--
Anh Ky Huynh @ UTC+7
Registered Linux User #392115
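[Editor's note] A minimal sketch of the two configuration changes Anh describes. In mfshdd.cfg, an asterisk in front of a path marks that disk for removal, and the replication throttles live in mfsmaster.cfg; the paths and limit values below are only examples, and the config file locations depend on how MooseFS was installed:

```
# /etc/mfshdd.cfg on the chunkserver being drained -- '*' marks a disk for removal
*/mnt/mfschunks1
*/mnt/mfschunks2

# /etc/mfsmaster.cfg -- raise the replication limits (example values)
CHUNKS_WRITE_REP_LIMIT = 10
CHUNKS_READ_REP_LIMIT = 25
```

After editing, restart mfsmaster and the affected mfschunkserver so the new settings take effect.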
From: Steve <st...@bo...> - 2011-03-08 09:09:52
|
Hi, Can you not pop down to your local recycle center ? For most applications almost any hardware would do for dedicated chunk servers unless your doing some other computations in them. Disk drives are now relatively cheap and even if the box doesn't support sata a simple sata card can be obtained for peanuts. Steve -------Original Message------- From: Randall Date: 03/08/11 06:46:46 To: moo...@li... Subject: Re: [Moosefs-users] Chunks On 03/03/2011 11:06 PM, Ricardo J. Barberis wrote: > El Jue 03 Marzo 2011, Steve escribió: >> In your first scenario why wouldn't you just mirror the drives ? > > With only one chunkserver? Yes. thanks, understand that if you have a single server setup just normail Raid (10) would be the better approach in this case. but I need some intermediate solution, working on a cash/space/equipment strapped hobby project where we start out with 1 server and we are bound to to end up with more chunk servers eventually. it is not ideal but having goal=2 using a single server (temporarily) saves the hassle of rebuilding the architecture of the setup in a later stage and is at least better than goal=1 anyway, in case someone might be interested, it is doable using virtualisation (linux VServer) using 2 chunk servers as a virtual machine with its own IP and mounting half the amount of disks in each. Again, this does not save you from machine failure and is not the best/most efficient mirroring solution in itself but it does save you somewhat from single disk failure. > >> -------Original Message------- >> >> From: Randall >> Date: 03/03/2011 10:14:00 >> To: moo...@li... >> Subject: Re: [Moosefs-users] Chunks >> >> On 03/02/2011 11:40 AM, Michal Borychowski wrote: >>> The main purpose of MooseFS system is security not the space savings. And >>> solution with RAID6 is not that secure. We generally advise not to use >>> any RAIDs and using at least goal=2. >> >> Wondering about this. >> >> when no raid and goal=2, this would mean when using multiple disks per >> server that each disk would be a separate chunk location. >> >> Can see the use of this when you use 1 single server as each copy would >> reside on 2 seperate disks so you are somewhat protected against disk >> failure. >> >> but when you have 2 servers with each 12 disks (24 chunk locations), >> does each chunk reside on 2 separate servers giving protection against >> server failure? >> >> did read somewhere there is work done on "location awareness" spreading >> each chunks over racks >> >> Randall > ----------------------------------------------------------------------------- What You Don't Know About Data Connectivity CAN Hurt You This paper provides an overview of data connectivity, details its effect on application quality, and explores various alternative solutions. http://p.sf.net/sfu/progress-d2d _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Giovanni T. <gt...@li...> - 2011-03-08 08:42:51
|
Hi, I'm getting a lot of these errors in dmesg on both chunkservers:

TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)
TCP: time wait bucket table overflow (CT0)

Googling around, it seems to be an exhaustion of available sockets, and a good workaround seems to be enabling tcp_tw_recycle:

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

The number of TIME_WAIT connections dropped from 4859 sockets to near 0 within the next few minutes.

Has anyone else hit this issue with the latest MooseFS under Debian Squeeze?

Thanks.
--
Giovanni Toraldo
http://www.libersoft.it/
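[Editor's note] If the workaround proves useful, a sketch of making it survive a reboot. Note that tcp_tw_recycle is known to break clients behind NAT, so it is safest on hosts that only talk to other cluster machines (an assumption about the topology here):

```
# persist the setting across reboots
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_tw_recycle = 1
EOF
sysctl -p
```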
From: randall <ra...@so...> - 2011-03-08 06:46:11
|
On 03/03/2011 11:06 PM, Ricardo J. Barberis wrote:
> On Thursday, 03 March 2011, Steve wrote:
>> In your first scenario why wouldn't you just mirror the drives ?
>
> With only one chunkserver? Yes.

Thanks. I understand that with a single-server setup, plain RAID (10) would be the better approach in this case.

But I need some intermediate solution: I'm working on a cash/space/equipment-strapped hobby project where we start out with 1 server and are bound to end up with more chunkservers eventually. It is not ideal, but having goal=2 on a single server (temporarily) saves the hassle of rebuilding the architecture of the setup at a later stage, and it is at least better than goal=1.

Anyway, in case someone is interested, it is doable using virtualisation (Linux-VServer): run 2 chunkservers as virtual machines, each with its own IP, and mount half of the disks in each. Again, this does not save you from machine failure and is not the best or most efficient mirroring solution in itself, but it does protect you somewhat against single-disk failure.

>
>> -------Original Message-------
>>
>> From: randall
>> Date: 03/03/2011 10:14:00
>> To: moo...@li...
>> Subject: Re: [Moosefs-users] Chunks
>>
>> On 03/02/2011 11:40 AM, Michal Borychowski wrote:
>>> The main purpose of MooseFS system is security not the space savings. And
>>> solution with RAID6 is not that secure. We generally advise not to use
>>> any RAIDs and using at least goal=2.
>>
>> Wondering about this.
>>
>> When no RAID and goal=2, this would mean, when using multiple disks per
>> server, that each disk would be a separate chunk location.
>>
>> Can see the use of this when you use 1 single server, as each copy would
>> reside on 2 separate disks, so you are somewhat protected against disk
>> failure.
>>
>> But when you have 2 servers with 12 disks each (24 chunk locations),
>> does each chunk reside on 2 separate servers, giving protection against
>> server failure?
>>
>> Did read somewhere there is work done on "location awareness", spreading
>> chunks over racks.
>>
>> Randall
>
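[Editor's note] A sketch of what the disk split randall describes could look like, assuming each VServer guest runs an ordinary chunkserver and only the contents of mfshdd.cfg differ between them (mount points and guest names are examples):

```
# guest "chunk-a" -- /etc/mfshdd.cfg
/mnt/disk1
/mnt/disk2
/mnt/disk3

# guest "chunk-b" -- /etc/mfshdd.cfg
/mnt/disk4
/mnt/disk5
/mnt/disk6
```

With goal=2 the master keeps the two copies of each chunk on different chunkserver processes, so a single failed disk only takes out one copy, though, as noted above, both copies still live in the same physical machine.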
From: Steve W. <st...@pu...> - 2011-03-07 15:42:42
|
Hi, I understand that there's some consideration being given to implementing a redundant master server for failover purposes. I've not seen this show up yet on the MooseFS roadmap, though. Are there any plans for building failover into MooseFS and, if so, is there a rough idea of the timeline involved (i.e., are we talking about six months or two years or...)? Thanks! Steve -- Steven M. Wilson, Systems and Network Manager Markey Center for Structural Biology Purdue University (765) 496-1946 |
From: youngcow <you...@gm...> - 2011-03-07 03:42:48
|
A storage snapshot doesn't copy all the data to another place.

> I have seen the web page about snapshots,
> but I want to know how they work.
>
> (Is it a full copy, or does it only copy the differences?)
>
> And how much storage does a snapshot use?
> (When I make a snapshot and use 'df' to check the storage, it appears
> to take the same size as the original, but it only takes a moment.
> Does it really get written to disk?)
>
> Does anyone know?
>
> Thank you so much!!
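[Editor's note] A small way to see this for yourself, assuming the MooseFS mount point is /mnt/mfs (an example path). mfsmakesnapshot creates the snapshot, and running mfsfileinfo on the original and the snapshot shows both referring to the same chunks until one side is modified:

```
mfsmakesnapshot /mnt/mfs/data /mnt/mfs/data-snap

# both paths should report the same chunk IDs right after the snapshot
mfsfileinfo /mnt/mfs/data/bigfile
mfsfileinfo /mnt/mfs/data-snap/bigfile
```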
From: mung ru t. <vt...@gm...> - 2011-03-07 03:26:31
|
I have seen the web page about snapshots, but I want to know how they work.

(Is it a full copy, or does it only copy the differences?)

And how much storage does a snapshot use? (When I make a snapshot and use 'df' to check the storage, it appears to take the same size as the original, but it only takes a moment. Does it really get written to disk?)

Does anyone know?

Thank you so much!!
From: Robert D. <ro...@in...> - 2011-03-04 19:58:25
|
Spoke too soon: (gdb) run -d Starting program: /usr/local/sbin/mfschunkserver -d [New LWP 1] [New LWP 2] [LWP 2 exited] [New LWP 2] warning: Lowest section in /lib/libpthread.so.1 is .dynamic at 00000074 Program received signal SIGSEGV, Segmentation fault. changeugid () at ../mfscommon/main.c:553 553 wrk_gid = gr->gr_gid; (gdb) _____ From: Robert Dye [mailto:ro...@in...] Sent: Friday, March 04, 2011 11:48 AM To: moo...@li... Subject: Re: [Moosefs-users] Compile on Nexenta Fails Looking a little further into the code: Unix Man Pages: getgrnam_r(const char *name, struct group *grp, char *buffer, int bufsize); Linux Man Pages: getgrnam_r(const char *name, struct group *gbuf, char *buf, size_t buflen, struct group **gbufp); I changed: getgrnam_r(wgroup,&grp,pwdgrpbuff,16384,&gr); to getgrnam_r(wgroup,&grp,pwdgrpbuff,16384); The compile works fine, but not sure if this will hold. _____ From: Robert Dye [mailto:ro...@in...] Sent: Friday, March 04, 2011 10:58 AM To: moo...@li... Subject: [Moosefs-users] Compile on Nexenta Fails Hello, When compiling moosefs on NexentaStor 3.0.4(Community), I use the following configure options: ./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount After configuring, make returns the following output: Making all in mfschunkserver make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver' gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po gcc -DHAVE_CONFIG_H -I. -I.. 
-I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c hddspacemgr.c: In function 'hdd_chunk_remove': hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_chunk_get': hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_check_folders': hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_emptycrc': hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness hddspacemgr.c: In function 'chunk_readcrc': hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_freecrc': hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_delayed_ops': hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_io_begin': hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness hddspacemgr.c: In function 'hdd_term': hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po gcc -DHAVE_CONFIG_H -I. -I.. 
-I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c ../mfscommon/main.c: In function 'changeugid': ../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r' ../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r' ../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r' make[2]: *** [mfschunkserver-main.o] Error 1 make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/mfs-1.6.20-2' make: *** [all] Error 2 Ideas/Suggestions/Work-Around? |
From: Robert D. <ro...@in...> - 2011-03-04 19:47:45
|
Looking a little further into the code: Unix Man Pages: getgrnam_r(const char *name, struct group *grp, char *buffer, int bufsize); Linux Man Pages: getgrnam_r(const char *name, struct group *gbuf, char *buf, size_t buflen, struct group **gbufp); I changed: getgrnam_r(wgroup,&grp,pwdgrpbuff,16384,&gr); to getgrnam_r(wgroup,&grp,pwdgrpbuff,16384); The compile works fine, but not sure if this will hold. _____ From: Robert Dye [mailto:ro...@in...] Sent: Friday, March 04, 2011 10:58 AM To: moo...@li... Subject: [Moosefs-users] Compile on Nexenta Fails Hello, When compiling moosefs on NexentaStor 3.0.4(Community), I use the following configure options: ./configure --disable-mfsmaster --disable-mfscgi --disable-mfscgiserv --disable-mfsmount After configuring, make returns the following output: Making all in mfschunkserver make[2]: Entering directory `/root/mfs-1.6.20-2/mfschunkserver' gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-bgjobs.o -MD -MP -MF .deps/mfschunkserver-bgjobs.Tpo -c -o mfschunkserver-bgjobs.o `test -f 'bgjobs.c' || echo './'`bgjobs.c mv -f .deps/mfschunkserver-bgjobs.Tpo .deps/mfschunkserver-bgjobs.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-csserv.o -MD -MP -MF .deps/mfschunkserver-csserv.Tpo -c -o mfschunkserver-csserv.o `test -f 'csserv.c' || echo './'`csserv.c mv -f .deps/mfschunkserver-csserv.Tpo .deps/mfschunkserver-csserv.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-hddspacemgr.o -MD -MP -MF .deps/mfschunkserver-hddspacemgr.Tpo -c -o mfschunkserver-hddspacemgr.o `test -f 'hddspacemgr.c' || echo './'`hddspacemgr.c hddspacemgr.c: In function 'hdd_chunk_remove': hddspacemgr.c:667: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:675: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_chunk_get': hddspacemgr.c:802: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:810: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_check_folders': hddspacemgr.c:1070: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:1078: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_emptycrc': hddspacemgr.c:1290: warning: pointer targets in assignment differ in signedness hddspacemgr.c: In function 'chunk_readcrc': hddspacemgr.c:1328: warning: pointer targets in assignment differ in signedness hddspacemgr.c:1343: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'chunk_freecrc': hddspacemgr.c:1355: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_delayed_ops': hddspacemgr.c:1505: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c: In function 'hdd_io_begin': hddspacemgr.c:1608: warning: pointer targets in assignment differ in signedness 
hddspacemgr.c: In function 'hdd_term': hddspacemgr.c:3669: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness hddspacemgr.c:3677: warning: pointer targets in passing argument 1 of 'munmap' differ in signedness mv -f .deps/mfschunkserver-hddspacemgr.Tpo .deps/mfschunkserver-hddspacemgr.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-masterconn.o -MD -MP -MF .deps/mfschunkserver-masterconn.Tpo -c -o mfschunkserver-masterconn.o `test -f 'masterconn.c' || echo './'`masterconn.c mv -f .deps/mfschunkserver-masterconn.Tpo .deps/mfschunkserver-masterconn.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-replicator.o -MD -MP -MF .deps/mfschunkserver-replicator.Tpo -c -o mfschunkserver-replicator.o `test -f 'replicator.c' || echo './'`replicator.c mv -f .deps/mfschunkserver-replicator.Tpo .deps/mfschunkserver-replicator.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-chartsdata.o -MD -MP -MF .deps/mfschunkserver-chartsdata.Tpo -c -o mfschunkserver-chartsdata.o `test -f 'chartsdata.c' || echo './'`chartsdata.c mv -f .deps/mfschunkserver-chartsdata.Tpo .deps/mfschunkserver-chartsdata.Po gcc -DHAVE_CONFIG_H -I. -I.. -I../mfscommon -DMFSMAXFILES=10000 -D_USE_PTHREADS -DAPPNAME=mfschunkserver -D__EXTENSIONS__ -D_REENTRANT -pthreads -std=c99 -g -O2 -W -Wall -Wshadow -pedantic -MT mfschunkserver-main.o -MD -MP -MF .deps/mfschunkserver-main.Tpo -c -o mfschunkserver-main.o `test -f '../mfscommon/main.c' || echo './'`../mfscommon/main.c ../mfscommon/main.c: In function 'changeugid': ../mfscommon/main.c:548: error: too many arguments to function 'getgrnam_r' ../mfscommon/main.c:561: error: too many arguments to function 'getpwuid_r' ../mfscommon/main.c:569: error: too many arguments to function 'getpwnam_r' make[2]: *** [mfschunkserver-main.o] Error 1 make[2]: Leaving directory `/root/mfs-1.6.20-2/mfschunkserver' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/root/mfs-1.6.20-2' make: *** [all] Error 2 Ideas/Suggestions/Work-Around? |