From: Steve <st...@bo...> - 2011-06-09 14:45:02
|
Is it at all possible that you could have more than one active master, and that masters could be added or removed in the same way a chunk server can, without affecting availability - sharing the workload?

-------Original Message-------
From: Thomas S Hatch
Date: 09/06/2011 15:20:07
To: Florent Bautista
Cc: moo...@li...
Subject: Re: [Moosefs-users] Failover of MFSMaster

There are some solutions which involve synchronized virtual machines; many hypervisors can do this. If you are planning on using the metalogger + mfsrestore there will always be a short failover lag. Personally, I think the best option would be for the metalogger to be able to populate the RAM of a secondary, read-only master - this is essentially the way Redis does it. The other option would be to set up systems similar to Mongo's HA solutions. Of course these solutions would require the MooseFS developers to implement them - I think that mfsmaster in-RAM replication is something that MooseFS is very close to. Let us know what you decide on; there are a number of individuals using HA solutions for MooseFS.

On Thu, Jun 9, 2011 at 5:06 AM, Florent Bautista <fl...@co...> wrote:

Hi all,

I have a question about failover of mfsmaster. Which solution is the best, for now? I don't know which one to choose between:
- drbd+heartbeat
- mfsmetalogger+mfsrestore

What do you think about it? Are there other ways to fail over mfsmaster (active sync of RAM between 2 hosts?)?

--
Florent Bautista

Ce message et ses éventuelles pièces jointes sont personnels, confidentiels et à l'usage exclusif de leur destinataire. Si vous n'êtes pas la personne à laquelle ce message est destiné, veuillez noter que vous avez reçu ce courriel par erreur et qu'il vous est strictement interdit d'utiliser, de diffuser, de transférer, d'imprimer ou de copier ce message.

This e-mail and any attachments hereto are strictly personal, confidential and intended solely for the addressee. If you are not the intended recipient, be advised that you have received this email in error and that any use, dissemination, forwarding, printing, or copying of this message is strictly prohibited.

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
From: Thomas S H. <tha...@gm...> - 2011-06-09 14:20:09
|
There are some solutions which involve synchronized virtual machines, many hyper-visors can do this. If you are planning on using the metalogger + mfsrestore there will always be a short failover lag. Personally, what I think would be the best option would be if the metalogger could populate the ram of a secondary read-only master. This is the same essential way that redis does it. The other option would be to set systems similar to mongo's HA solutions. Of course these solutions would require the MooseFS developers to implement them - I think that mfsmaster in-ram replication is something that MooseFS is very close to. Let us know what you decide on, there are a number of individuals using HA solutions for MooseFS. On Thu, Jun 9, 2011 at 5:06 AM, Florent Bautista <fl...@co...>wrote: > Hi all, > > I have a question about failover of mfsmaster. > > Which solution is the best, for now ? > > I don't know which solution to choose between : > > - drdb+heartbeat > - mfsmetalogger+mfsrestore > > What do you think about it ? is there other ways to failover mfsmaster > (active sync of RAM between 2 hosts ?) ? > > -- > > > Florent Bautista > ------------------------------ > > Ce message et ses éventuelles pièces jointes sont personnels, confidentiels > et à l'usage exclusif de leur destinataire. > Si vous n'êtes pas la personne à laquelle ce message est destiné, veuillez > noter que vous avez reçu ce courriel par erreur et qu'il vous est > strictement interdit d'utiliser, de diffuser, de transférer, d'imprimer ou > de copier ce message. > > This e-mail and any attachments hereto are strictly personal, confidential > and intended solely for the addressee. > If you are not the intended recipient, be advised that you have received > this email in error and that any use, dissemination, forwarding, printing, > or copying of this message is strictly prohibited. > ------------------------------ > > > ------------------------------------------------------------------------------ > EditLive Enterprise is the world's most technically advanced content > authoring tool. Experience the power of Track Changes, Inline Image > Editing and ensure content is compliant with Accessibility Checking. > http://p.sf.net/sfu/ephox-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
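For reference, a minimal sketch of the metalogger + mfsmetarestore failover path discussed above, run on the metalogger host being promoted to master. The init script name, data directory and the idea of repointing a DNS name or floating IP are assumptions; adjust to your installation:

    # Stop the metalogger so its changelogs are closed cleanly
    /etc/init.d/mfsmetalogger stop                 # assumed init script name

    # Rebuild metadata.mfs from the metalogger's backup plus its changelogs
    cd /usr/local/var/mfs                          # assumed MooseFS data directory
    mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml.*.mfs

    # Start mfsmaster on this host and repoint clients/chunkservers at it,
    # e.g. by moving the "mfsmaster" DNS name or a floating IP to this machine
    mfsmaster start

This is the source of the "short failover lag" mentioned above: the changelogs have to be replayed and all chunkservers have to re-register before clients can work again.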
From: Florent B. <fl...@co...> - 2011-06-09 11:07:01
|
Hi all,

I have a question about failover of mfsmaster. Which solution is the best, for now? I don't know which one to choose between:

- drbd+heartbeat
- mfsmetalogger+mfsrestore

What do you think about it? Are there other ways to fail over mfsmaster (active sync of RAM between 2 hosts?)?

--
Florent Bautista

------------------------------------------------------------------------
Ce message et ses éventuelles pièces jointes sont personnels, confidentiels et à l'usage exclusif de leur destinataire. Si vous n'êtes pas la personne à laquelle ce message est destiné, veuillez noter que vous avez reçu ce courriel par erreur et qu'il vous est strictement interdit d'utiliser, de diffuser, de transférer, d'imprimer ou de copier ce message.

This e-mail and any attachments hereto are strictly personal, confidential and intended solely for the addressee. If you are not the intended recipient, be advised that you have received this email in error and that any use, dissemination, forwarding, printing, or copying of this message is strictly prohibited.
------------------------------------------------------------------------
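For the drbd+heartbeat option named in the question above, a rough Heartbeat v1 style sketch: keep the master's data directory on a DRBD device and fail over a virtual IP, the filesystem and the mfsmaster service together. The node name, addresses, DRBD resource name, mount point and the existence of an "mfsmaster" init/resource script are all assumptions:

    # /etc/ha.d/haresources (identical on both nodes, Heartbeat v1 syntax)
    # node1 normally owns: floating IP, DRBD resource "mfs", its filesystem, and mfsmaster
    node1 IPaddr::192.168.0.100/24/eth0 drbddisk::mfs Filesystem::/dev/drbd0::/usr/local/var/mfs::ext3 mfsmaster

    # Clients and chunkservers resolve "mfsmaster" to 192.168.0.100, so whichever
    # node currently holds the resource group answers on the usual ports 9420/9421.

The trade-off versus the metalogger approach is that DRBD replicates the on-disk metadata synchronously, but a failover still restarts mfsmaster and reloads metadata into RAM.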
From: Romain G. <rom...@li...> - 2011-06-09 09:34:20
|
Hello, I have a question about snapshots in the latest MooseFS version. Once I have a snapshot that was created by the mfsmakesnapshot command, I would like to use it to restore my FS, but I have not seen in the documentation a command that regenerates my FS the way ZFS does with its rollback command.

Cordially,
Romain
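As far as I know there is no built-in rollback command; mfsmakesnapshot only creates a lazy copy inside the same MooseFS tree, so a "rollback" is essentially snapshotting (or renaming) in the other direction. A rough sketch, with all paths assumed:

    # Take a snapshot of a directory (both paths must live on the same MooseFS mount)
    mfsmakesnapshot /mnt/mfs/data /mnt/mfs/snapshots/data-20110609

    # "Roll back" later by snapshotting the saved copy back and swapping it into
    # place - MooseFS itself has no atomic rollback operation
    mfsmakesnapshot /mnt/mfs/snapshots/data-20110609 /mnt/mfs/data-restored
    mv /mnt/mfs/data /mnt/mfs/data-broken && mv /mnt/mfs/data-restored /mnt/mfs/data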
From: <wk...@bn...> - 2011-06-09 06:14:58
|
On 6/8/11 2:10 AM, Michal Borychowski wrote:
> Hi!
>
> Thanks for pointing this out. Yes, it would be really important to make some
> priorities in the rebalancing process which we put on our to-do list.
>
> We have this plan for prioritizing:
> - high priority (chunks with one copy with goal>1; orange color in CGI
> monitor)

Yes, the quicker we get back to 'at goal', the better. I've seen servers from the same purchase lot go down within days of each other, so the more 'insurance' the better.

> - middle priority (chunks undergoal and overgoal; yellow and blue color)
> - low priority (normal and deleted chunks; green and grey color)
> - "special" priority (chunks with 0 copies; red color)
>
> What is your opinion about it?

Looks fine, though overgoal doesn't bother me that much <grin>.

-bill
From: rxknhe <rx...@gm...> - 2011-06-08 17:02:06
|
Hi there, If it can be given an option as 'on the fly tunable', that would be great. Here is the reason for asking. (1) We like conservative re-balancing approach of MooseFS, because in case of hardware failure or new chunkserver addition, MooseFS won't go crazy and start pounding disks and consuming I/O resource, thus causing i/o starvation for applications using MooseFS share. This works nicely most of the time. (2) However if we replace a dead chunkserver or mark area with '*' (i.e disk area to be replaced) then it takes several days to re-balance. With goals=2, this is a critical moment(when one chunkserver dies down), because if we loose one more chunk server (or disk area), we are on the verge of loosing data. In this case, if MooseFS can be told to speedup re-balancing and use more resources if needed. Hence we can live with goals=2 (i.e more usable space for data, instead of setting goals=3) comfortably. Depending upon nature of application served via MooseFS, admin can decide if willing to take risk with longer re-balance cycle and better i/o for applications or choose re-balance speedup and sacrifice i/o for application for a while. Although to begin with, your approach of speeding up based on "high priority (chunks with one copy with goal>1; orange color in CGI monitor)" and such, is a very welcome enhancement. regards rxknhe 2011/6/8 Michal Borychowski <mic...@ge...> > Hi! > > Thanks for pointing this out. Yes, it would be really important to make > some > priorities in the rebalancing process which we put on our to-do list. > > We have this plan for prioritizing: > - high priority (chunks with one copy with goal>1; orange color in CGI > monitor) > - middle priority (chunks undergoal and overgoal; yellow and blue color) > - low priority (normal and deleted chunks; green and grey color) > - "special" priority (chunks with 0 copies; red color) > > What is your opinion about it? > > > Kind regards > Michał Borychowski > MooseFS Support Manager > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > Gemius S.A. > ul. Wołoska 7, 02-672 Warszawa > Budynek MARS, klatka D > Tel.: +4822 874-41-00 > Fax : +4822 874-41-01 > > > > > -----Original Message----- > From: WK [mailto:wk...@bn...] > Sent: Thursday, May 26, 2011 2:41 AM > To: moo...@li... > Subject: Re: [Moosefs-users] Replication Priority? > > On 5/23/2011 3:32 PM, W Kern wrote: > > So now the CGI is showing 10,000+ chunks with a single copy (red), 2 > > million+ chunks are now orange (2 copies) and the system is happily > > increasing the 'green' 3 valid copy column. > > > > The problem is that it seems to be concentrating on the orange (2 copy) > > files and ignoring the 10,000+ red ones that are most at risk. In the > > last hour we've seen a few 'red' chunks > > disappear but the vast majority of activity is occuring in the orange (2 > > copy) column. > > > > Shouldn't the replication worry about the single copy files first? > > > > I also realize we could simply set the goal back to 2 let it finish that > > up and THEN switch it to 3 but I'm curious as to what the community says. > > > > Just a followup, after a day or so we still had "red" single copy > chunks. Obviously the under goal routine doesn't look at how badly under > goal a given chunk is. > > So we dropped the Goal back down to 2 and MFS immediately focused on the > single copy chunks. > > The only problem observed was that shortly after dropping the goal back > down to 2, the mount complained of connection issues and people were > kicked out of their IMAP sessions. 
> That condition returned to normal less than a minute later and no files > were lost. > > Once the under goal of 2 was completed an hour or so later, we reset the > Goal to 3 and in a few days we should be fully green. In the meantime, > we have at least two copies and are not vulnerable to an additional > failure. > > I would still suggest that the "under goal" routine might want to look > first at those chunks that are more severely out of goal, then go back > and fix the others. Assuming that doesn't impact overall performance. > > -bill > > > ---------------------------------------------------------------------------- > -- > vRanger cuts backup time in half-while increasing security. > With the market-leading solution for virtual backup and recovery, > you get blazing-fast, flexible, and affordable data protection. > Download your free trial now. > http://p.sf.net/sfu/quest-d2dcopy1 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > > > ------------------------------------------------------------------------------ > EditLive Enterprise is the world's most technically advanced content > authoring tool. Experience the power of Track Changes, Inline Image > Editing and ensure content is compliant with Accessibility Checking. > http://p.sf.net/sfu/ephox-dev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
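The workaround described in the quoted thread - temporarily dropping the goal so that single-copy chunks get replicated first, then raising it again - looks roughly like this on a client mount (the mount path and goal values are only examples):

    # Drop the goal so the rebalancer concentrates on chunks that have only one copy
    mfssetgoal -r 2 /mnt/mfs

    # Watch the CGI monitor; once nothing is left in the "red" (single-copy)
    # column, raise the goal back up
    mfssetgoal -r 3 /mnt/mfs

    # Replication status of an individual file can be checked with:
    mfscheckfile /mnt/mfs/some/file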
From: jose m. <let...@us...> - 2011-06-08 16:22:05
|
On Wed, 08-06-2011 at 11:10 +0200, Michal Borychowski wrote:
> Hi!
>
> Thanks for pointing this out. Yes, it would be really important to make some
> priorities in the rebalancing process which we put on our to-do list.
>
> We have this plan for prioritizing:
> - high priority (chunks with one copy with goal>1; orange color in CGI
> monitor)
> - middle priority (chunks undergoal and overgoal; yellow and blue color)
> - low priority (normal and deleted chunks; green and grey color)
> - "special" priority (chunks with 0 copies; red color)
>
> What is your opinion about it?

* Excellent, thank you.
From: Michal B. <mic...@ge...> - 2011-06-08 09:10:24
|
Hi! Thanks for pointing this out. Yes, it would be really important to make some priorities in the rebalancing process which we put on our to-do list. We have this plan for prioritizing: - high priority (chunks with one copy with goal>1; orange color in CGI monitor) - middle priority (chunks undergoal and overgoal; yellow and blue color) - low priority (normal and deleted chunks; green and grey color) - "special" priority (chunks with 0 copies; red color) What is your opinion about it? Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: WK [mailto:wk...@bn...] Sent: Thursday, May 26, 2011 2:41 AM To: moo...@li... Subject: Re: [Moosefs-users] Replication Priority? On 5/23/2011 3:32 PM, W Kern wrote: > So now the CGI is showing 10,000+ chunks with a single copy (red), 2 > million+ chunks are now orange (2 copies) and the system is happily > increasing the 'green' 3 valid copy column. > > The problem is that it seems to be concentrating on the orange (2 copy) > files and ignoring the 10,000+ red ones that are most at risk. In the > last hour we've seen a few 'red' chunks > disappear but the vast majority of activity is occuring in the orange (2 > copy) column. > > Shouldn't the replication worry about the single copy files first? > > I also realize we could simply set the goal back to 2 let it finish that > up and THEN switch it to 3 but I'm curious as to what the community says. > Just a followup, after a day or so we still had "red" single copy chunks. Obviously the under goal routine doesn't look at how badly under goal a given chunk is. So we dropped the Goal back down to 2 and MFS immediately focused on the single copy chunks. The only problem observed was that shortly after dropping the goal back down to 2, the mount complained of connection issues and people were kicked out of their IMAP sessions. That condition returned to normal less than a minute later and no files were lost. Once the under goal of 2 was completed an hour or so later, we reset the Goal to 3 and in a few days we should be fully green. In the meantime, we have at least two copies and are not vulnerable to an additional failure. I would still suggest that the "under goal" routine might want to look first at those chunks that are more severely out of goal, then go back and fix the others. Assuming that doesn't impact overall performance. -bill ---------------------------------------------------------------------------- -- vRanger cuts backup time in half-while increasing security. With the market-leading solution for virtual backup and recovery, you get blazing-fast, flexible, and affordable data protection. Download your free trial now. http://p.sf.net/sfu/quest-d2dcopy1 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Piotr T. <pio...@co...> - 2011-06-07 14:32:38
|
You can try with bridge 'forward delay' option (in dom0). Maybe the problem you suffer is that the bridge/interface is in learning state (for forward delay time, default is 15 sec) before the state will change to forwarding. Regards, Piotr Teodorowski On Wednesday 01 of June 2011 23:21:23 Josef wrote: > The network device with a moosefs master is eth1 and the master was > running. But I found out, that there is a huge delay between the network > is up and between it actually starts working. I forgot to mention, that > it is a Xen PV guest, so I guess it takes some time before the Xen host > adds the guest eth to its bridge. So I have solved it by a stupid init > script, that waits 10 seconds and remounts all fuse filesystems: > > echo "Remounting for MFS" > sleep 10 > mount -a -t fuse > > Nicer solution would be to add a delay to ifup script of a network > interface... > > Dne 6/1/11 10:55 AM, Michal Borychowski napsal(a): > >> -----Original Message----- > >> From: Josef [mailto:pe...@p-...] > >> Sent: Tuesday, May 31, 2011 4:41 PM > >> To: moo...@li... > >> Subject: [Moosefs-users] automount in debian > >> > >> Hello, > >> I'm having problems with automount under debian (squeeze). I have > >> this line in my fstab: > >> /opt/mfs/bin/mfsmount /mnt/mfs fuse > >> mfsmaster=172.16.100.2,mfsport=9421,_netdev 0 0 > >> > >> during the startup debian finds out, that it is a net device, so calls > >> if-ip.d/mountmfs, but I fails: > >> > >> Configuring network interfaces...if-up.d/mountnfs[eth0]: waiting for > >> interface eth1 before doing NFS mounts ... (warning). > > > > This message comes from Debian init scripts and means, that eth0 has been > > initialized, but eth1 isn't yet, so mounting of network filesystems is > > delayed until initialization of the remaining network interfaces. > > > >> can't connect to mfsmaster ("172.16.100.2":"9421") > > > > It's mfsmount message, probably occurs after initialization of eth1. > > > > There are a few possible causes, e.g.: > > - master is not running yet > > - given host is not accessible yet (maybe there is some delay between > > initializing network interface and network being accessible?) > > > > > > Kind regards > > Michal Borychowski > > --------------------------------------------------------------------------- > --- Simplify data backup and recovery for your virtual environment with > vRanger. Installation's a snap, and flexible recovery options mean your > data is safe, secure and there when you need it. Data protection magic? > Nope - It's vRanger. Get your free trial download today. > http://p.sf.net/sfu/quest-sfdev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
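If the problem really is the bridge's learning phase in dom0, the forward delay can be reduced so the guest's interface starts forwarding (and mfsmount can reach the master) almost immediately. The bridge name below is an assumption:

    # In dom0: inspect and reduce the forward delay on the Xen bridge
    brctl showstp xenbr0 | grep -i "forward delay"
    brctl setfd xenbr0 0          # seconds; the default is often 15

    # Alternative used earlier in the thread: retry the fuse mounts in the guest
    sleep 10 && mount -a -t fuse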
From: Stas O. <sta...@gm...> - 2011-06-07 12:33:53
|
Hi. Thanks, that was actually the issue. The error was hidden by the init.d script (visible only via console), and didn't appear in logs - will be good if it could be duplicated in logs as well. Regards. On Tue, Jun 7, 2011 at 2:55 AM, Robert Dye <ro...@in...> wrote: > If this is a fresh installation, you might try renaming to the > metadata.mfs.sample file located under /usr/local/lib/mfs (or respective to > your distribution). > > > > Double check your configuration files, the examples should help get you > started. > > > > -Rob > > > ------------------------------ > > *From:* Stas Oskin [mailto:sta...@gm...] > *Sent:* Monday, June 06, 2011 1:00 PM > *To:* moosefs-users > *Subject:* [Moosefs-users] can't open metadata file error on new > installation > > > > Hi. > > I'm trying to set a new moosefs cluster, and getting these errors during > mfsmaster start-up: > > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: set gid to 2 > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: set uid to 2 > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: can't load sessions, fopen error: > ENOENT (No such file or directory) > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: exports file has been loaded > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: can't open metadata file > Jun 6 19:54:51 ovsw1 mfsmaster[19248]: init: file system manager failed > !!! > > Any idea what is missing for cluster to run? > > I'm using the latest 1.20 version. > > Thanks. > |
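The "can't open metadata file" error on a brand-new master usually just means the initial metadata file has not been created yet. In a default source install the shipped template is typically metadata.mfs.empty in the master's data directory (the reply above mentions a slightly different file name and location, so check your own distribution's layout):

    # Create an empty metadata file for a fresh installation
    cd /usr/local/var/mfs                # or /var/lib/mfs, depending on the build
    cp metadata.mfs.empty metadata.mfs
    chown mfs:mfs metadata.mfs           # adjust to the user mfsmaster runs as

    /usr/local/sbin/mfsmaster start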
From: Stas O. <sta...@gm...> - 2011-06-06 20:00:23
|
Hi. I'm trying to set a new moosefs cluster, and getting these errors during mfsmaster start-up: Jun 6 19:54:51 ovsw1 mfsmaster[19248]: set gid to 2 Jun 6 19:54:51 ovsw1 mfsmaster[19248]: set uid to 2 Jun 6 19:54:51 ovsw1 mfsmaster[19248]: can't load sessions, fopen error: ENOENT (No such file or directory) Jun 6 19:54:51 ovsw1 mfsmaster[19248]: exports file has been loaded Jun 6 19:54:51 ovsw1 mfsmaster[19248]: can't open metadata file Jun 6 19:54:51 ovsw1 mfsmaster[19248]: init: file system manager failed !!! Any idea what is missing for cluster to run? I'm using the latest 1.20 version. Thanks. |
From: rxknhe <rx...@gm...> - 2011-06-06 18:01:18
|
In /etc/fstab put a line like the one below (assuming the mount point is /net/mfs/foo; replace the other parameters for your setup):

mfsmount /net/mfs/foo fuse mfsmaster=mfsmaster-foo.example.com,mfssubfolder=/ 0 0

Then in /etc/rc.local put the line *mount /net/mfs/foo* (or you can put an mfsmount command directly in /etc/rc.local, which is executed last in the boot process, after all init scripts).

------

The other option is to use the automounter, which mounts a filesystem automatically when its path is accessed. A corresponding automounter setup to mount /net/mfs/foo:

File /etc/auto.master::
/net/mfs /etc/auto.mfs --timeout=120

File /etc/auto.mfs::
foo -fstype=fuse,mfsmaster=mfsmaster-foo.example.com,mfssubfolder=/ \:mfsmount

On Mon, Jun 6, 2011 at 1:44 PM, Stas Oskin <sta...@gm...> wrote:
> Hi.
>
> Can you give some example, or point me to any resource explaining how to do
> it?
>
> Thanks.
>
> On Mon, Jun 6, 2011 at 8:38 PM, Ricardo J. Barberis <
> ric...@da...> wrote:
>
>> On Monday 06 June 2011, Stas Oskin wrote:
>>
>> > Hi.
>> >
>> > I have chunk-servers working as client of themselves, and I have an issue
>> > of mounting the file system via fstab, as it happens before the
>> > chunk-server was launched.
>> >
>> > What would be the best method of re-mounting the fstab, after the
>> > chunk-server came up?
>> >
>> > Thanks.
>>
>> You could modify /etc/fstab so the filesystem doesn't get mounted by default
>> and mount it from /etc/rc.local or similar.
>>
>> Regards,
>> --
>> Ricardo J. Barberis
>> Senior SysAdmin / ITI
>> Dattatec.com :: Soluciones de Web Hosting
>> Tu Hosting hecho Simple!
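A related note: more recent mfsmount releases also have a mount option that defers the connection to the master, which can help when the master (or a local chunkserver) is not yet up at boot time. Whether the option is available depends on your mfsmount version, so treat this fstab line as a sketch:

    # /etc/fstab - let mfsmount succeed even if the master is not reachable yet
    mfsmount  /mnt/mfs  fuse  mfsmaster=mfsmaster.example.com,mfsdelayedinit,_netdev  0 0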
From: Stas O. <sta...@gm...> - 2011-06-06 17:44:47
|
Hi. Can you give some example, or point me to any resource explaining how to do it? Thanks. On Mon, Jun 6, 2011 at 8:38 PM, Ricardo J. Barberis < ric...@da...> wrote: > El Lunes 06 Junio 2011, Stas Oskin escribió: > > Hi. > > > > I have chunk-servers working as client of themselves, and I have an issue > > of mounting the file system via fstab, as it happens before the > > chunk-server was launched. > > > > What would be the best method of re-mounting the fstab, after the > > chunk-server came up? > > > > Thanks. > > You could modify /etc/fstab so the filesystem doesn't get mounted by > default > and mount it from /etc/rc.local or similar. > > Regards, > -- > Ricardo J. Barberis > Senior SysAdmin / ITI > Dattatec.com :: Soluciones de Web Hosting > Tu Hosting hecho Simple! > > > ------------------------------------------------------------------------------ > Simplify data backup and recovery for your virtual environment with > vRanger. > Installation's a snap, and flexible recovery options mean your data is > safe, > secure and there when you need it. Discover what all the cheering's about. > Get your free trial download today. > http://p.sf.net/sfu/quest-dev2dev2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Ricardo J. B. <ric...@da...> - 2011-06-06 17:38:14
|
El Lunes 06 Junio 2011, Stas Oskin escribió: > Hi. > > I have chunk-servers working as client of themselves, and I have an issue > of mounting the file system via fstab, as it happens before the > chunk-server was launched. > > What would be the best method of re-mounting the fstab, after the > chunk-server came up? > > Thanks. You could modify /etc/fstab so the filesystem doesn't get mounted by default and mount it from /etc/rc.local or similar. Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! |
From: Stas O. <sta...@gm...> - 2011-06-06 16:57:25
|
Hi.

I have chunk servers working as clients of themselves, and I have an issue with mounting the file system via fstab: the mount happens before the chunk server is launched. What would be the best method of re-mounting the fstab entries after the chunk server comes up?

Thanks.
From: Jean-Baptiste <jb...@jb...> - 2011-06-06 09:59:09
|
2011/6/6 Michal Borychowski <mic...@ge...>:
> Unfortunately when everybody started to commit to their branches requesting
> us to test, merge, publish, etc., it would take from us lots of energy and
> we would have less time for developing the code. So please understand our
> point of view where the stability is our main goal.

I respect that. Thank you for your official answer.
From: Florent B. <fl...@co...> - 2011-06-06 09:07:54
|
Hi,

Thank you for this information :)

On 06/06/2011 10:39, Michal Borychowski wrote:
>
> Hi!
>
> When a small file is saved for the first time in the system, it occupies
> the same size as its "length". But upon copying or replication, a small
> file is filled at the end with zeros "0" up to 64KiB. That's why you
> have different usage on the other chunk servers.
>
> You can check it by summing up the lengths of the files on the
> chunkservers, with something like:
>
> find ... -name '*.mfs' | xargs ls -l | awk '{ a+=$5; } END { print a; }'
>
> Regards
>
> -Michał

--
Florent Bautista

------------------------------------------------------------------------
Ce message et ses éventuelles pièces jointes sont personnels, confidentiels et à l'usage exclusif de leur destinataire. Si vous n'êtes pas la personne à laquelle ce message est destiné, veuillez noter que vous avez reçu ce courriel par erreur et qu'il vous est strictement interdit d'utiliser, de diffuser, de transférer, d'imprimer ou de copier ce message.

This e-mail and any attachments hereto are strictly personal, confidential and intended solely for the addressee. If you are not the intended recipient, be advised that you have received this email in error and that any use, dissemination, forwarding, printing, or copying of this message is strictly prohibited.
------------------------------------------------------------------------
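A filled-in version of the command quoted above, run on a chunkserver; it sums the sizes of the chunk files under one data directory (the path is an example - use the directories listed in mfshdd.cfg):

    # Sum the sizes of all chunk files under one chunkserver data directory
    find /mnt/chunkdisk1 -type f -name '*.mfs' -print0 \
        | xargs -0 ls -l \
        | awk '{ a += $5 } END { print a }'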
From: Michal B. <mic...@ge...> - 2011-06-06 08:33:01
|
Hi!

For the moment you need to remount the client(s). We'll think about forcing an mfs remount from the master level (by mastertools).

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: lcfc2316 [mailto:lcf...@16...]
Sent: Friday, June 03, 2011 10:47 AM
To: moosefs-users
Subject: [Moosefs-users] Is there any way that I don't need to re-mount mfsclient

Hi supporters,

I changed an mfsclient from "rw" to "ro" in the file 'mfsexports.cfg' and restarted mfsmaster. But in the web monitor it doesn't change - it is still "rw". After I re-mount the mfsclient, I get "ro". I had already mounted the client to the mfsmaster before making the change. Is there any way that I don't need to re-mount the mfsclient?

Version: 1.6.20
OS: Red Hat Enterprise Linux Server release 5.3 x86_64 (all three)
File system: ext3

Best regards!
2011-06-03
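For context, the change being discussed is an entry in mfsexports.cfg on the master; each line is "address directory options". The addresses and option set below are only an example:

    # /etc/mfsexports.cfg on the master
    # before: a whole subnet mounted read-write
    192.168.1.0/24   /   rw,alldirs,maproot=0

    # after: the same subnet restricted to read-only
    192.168.1.0/24   /   ro,alldirs,maproot=0

    # reload the master configuration; as noted above, clients that are already
    # mounted keep their existing session until they remount
    mfsmaster reload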
From: Michal B. <mic...@ge...> - 2011-06-06 08:30:08
|
Hi! Let me speak about the git repositories and the code maintenance. The only official git repository is hosted at SourceForge.net. Although MooseFS is open source we feel highly responsible for its stability and quality - we make extensive tests in our development and production environments and only after that we release a new version. Your contributions are more than welcome but we prefer to get them in the form of patches which we can easily analyze and test. Usually a patch contributed by open community is "finetuned" by our team and gets published in the next release. Unfortunately when everybody started to commit to their branches requesting us to test, merge, publish, etc., it would take from us lots of energy and we would have less time for developing the code. So please understand our point of view where the stability is our main goal. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Anh K. Huynh [mailto:ky...@vi...] Sent: Sunday, June 05, 2011 3:18 AM To: Jean-Baptiste Cc: moo...@li... Subject: Re: [Moosefs-users] git source code repository On Sat, 4 Jun 2011 15:05:50 +0000 Jean-Baptiste <jb...@jb...> wrote: > On Tue, May 31, 2011 at 2:13 PM, Anh K. Huynh <ky...@vi...> > wrote: > > > IMHO, Git allows developers to work *easily* on private branches, > > so we may not see any activities on the repository. BTW, this is > > my opinion; this isn't official reply from MooseFS team. > > Thank you for your answer. Is that possible to have an official > reply from the MooseFS about those private branches ? It could be > quite frustrating to develop something on the public trunk and > having a difficult merge time when a new release pops in the public > git repository. > > I'm not saying anything wrong about the MooseFS team here, i think > it's perfectly understandable to have non-public working repository, > but maybe it will encourage people to contribute if they can have an > eye on the work in progress =) BTW, the IRC channel #moosefs is quite active and open as I know. Feel to join it :) I love the guys there! -- Anh Ky Huynh @ ICT Registered Linux User #392115 ---------------------------------------------------------------------------- -- Simplify data backup and recovery for your virtual environment with vRanger. Installation's a snap, and flexible recovery options mean your data is safe, secure and there when you need it. Discover what all the cheering's about. Get your free trial download today. http://p.sf.net/sfu/quest-dev2dev2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Tuukka L. <tlu...@gm...> - 2011-06-06 06:17:17
|
So the chunk servers in effect do not know what files they have. Only the master is aware? I looked at the chunks themselves and they don't seem to be particularly special, I was able to identify some of the files inside the chunks simply by using head/tail/cat commands on them, so it would seem like it would not be hard for each chunkserver to be at least minimally aware of what is in themselves, to remove some dependency on the master. Not sure how other people feel, but if I knew that each chunkserver can tell the master what it has, it would make me feel better that there is a way to recover the contents of the file system in case of a extreme failure, specially in small implementations. I realize in larger implementations it may be impractical to ever recover the meta data from the chunkservers, simply for the time it might take. So having very reliable masters and backup masters would be a key. Thanks. 2011/6/5 Michal Borychowski <mic...@ge...>: > If you really don't have your metadata, your files are dead... You need > metadata to know about them. Of course you can try to recover by reading the > surface of hard drives on chunkservers but this would be very tedious... > > > Regards > Michał > > -----Original Message----- > From: Tuukka Luolamo [mailto:tlu...@gm...] > Sent: Monday, June 06, 2011 7:56 AM > To: Michal Borychowski > Cc: moo...@li... > Subject: Re: [Moosefs-users] Problems after power failure > > Well I guess the question is in the impossible situation that the > metadata were to be completely corrupt or lost because there is no > backup and master goes dead, what recourse is there if any? > > Thanks > > 2011/6/5 Michal Borychowski <mic...@ge...>: >> Hi! >> >> That's interesting... So this bit could be changed by your CPU or >> motherboard or it is an error in the software but it would be very very >> difficult to find it as the error probably cannot be easily repeated. >> >> Regarding your previous questions - it's almost impossible that your >> metadata is "completely" corrupt. You really can recover most of your > files >> at most times. This situation was really weird as there was a single > change >> in an information bit. Normally you would run mfsmetarestore with a flag > -i >> (ignore) and it would just ignore this one file. Unfortunately you would > not >> be able to repair this single bit as this was quite a complicated process. >> >> >> Kind regards >> Michał >> >> >> -----Original Message----- >> From: Tuukka Luolamo [mailto:tlu...@gm...] >> Sent: Monday, June 06, 2011 5:01 AM >> To: Michal Borychowski; moo...@li... >> Subject: Re: [Moosefs-users] Problems after power failure >> >> OK I ran the memtest on the master server, It got through without >> finding any errors. >> >> 2011/6/2 Michal Borychowski <mic...@ge...>: >>> Hi! >>> >>> In the meantime please run the memtest - we are curious if it really was >>> hardware problem or maybe it could be a software problem >>> >>> >>> Regards >>> Michal >>> >>> -----Original Message----- >>> From: Tuukka Luolamo [mailto:tlu...@gm...] >>> Sent: Thursday, June 02, 2011 9:55 AM >>> To: WK >>> Cc: moo...@li... >>> Subject: Re: [Moosefs-users] Problems after power failure >>> >>> OK I put in place the file the dev sent me and can not see any data >> loss... >>> >>> I found the file in question the one I got in the error and it seems > fine. >>> >>> The whole system is up and functioning. 
>>> >>> I run the system on a old desktop computer and a another PC I bought >>> for $25 so the dev recommends making sure you have good memory, but I >>> guess I am using whatever I got =) Aside from this error everything >>> has been fine. Didn't run the memtest they recommended, but I would >>> not count out memory errors. >>> >>> However I would like to understand the situation better mainly for >>> what are my recourses. As WK articulated already had my metadata been >>> completely corrupt would I have lost all my data? Would I lose just >>> one file the one with the error? And can I fix this error myself? >>> >>> Thanks, >>> >>> Tuukka >>> >>> On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote: >>>> On 6/1/2011 2:30 AM, Michal Borychowski wrote: >>>>> >>>>> We think that this problem could be caused by your RAM in the master. > We >>>>> recommend using RAM with parity control. You can also run a test from >>>>> http://www.memtest.org/ on your server and check your existing RAM. Of >>>>> course, the bit could have been changed also on the motherboard level > or >>> CPU >>>>> - which is much less probable. >>>>> >>>>> >>>>> Also you can see in the log that file 7538 is located between 7553 and >>> 7555: >>>> >>>> >>>> So in a situation like this where the metadata is now corrupt. >>>> >>>> Is the problem fixable with only the loss of the one file? (and how does >>>> one fix it). >>>> >>>> or is his entire MFS setup completely corrupt and he would need to have >>>> had a backup? >>>> >>>> Can I assume that older archived versions of the metadata.mfs could be >>>> used to recover most of the files. >>>> >>>> -bill >>>> >>>> >>> >> > ---------------------------------------------------------------------------- >>> -- >>>> Simplify data backup and recovery for your virtual environment with >>> vRanger. >>>> Installation's a snap, and flexible recovery options mean your data is >>> safe, >>>> secure and there when you need it. Data protection magic? >>>> Nope - It's vRanger. Get your free trial download today. >>>> http://p.sf.net/sfu/quest-sfdev2dev >>>> _______________________________________________ >>>> moosefs-users mailing list >>>> moo...@li... >>>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>>> >>> >>> >> > ---------------------------------------------------------------------------- >>> -- >>> Simplify data backup and recovery for your virtual environment with >> vRanger. >>> >>> Installation's a snap, and flexible recovery options mean your data is >> safe, >>> secure and there when you need it. Data protection magic? >>> Nope - It's vRanger. Get your free trial download today. >>> http://p.sf.net/sfu/quest-sfdev2dev >>> _______________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >>> >> >> > ---------------------------------------------------------------------------- >> -- >> Simplify data backup and recovery for your virtual environment with > vRanger. >> Installation's a snap, and flexible recovery options mean your data is > safe, >> secure and there when you need it. Discover what all the cheering's about. >> Get your free trial download today. >> http://p.sf.net/sfu/quest-dev2dev2 >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> >> > > |
From: Michal B. <mic...@ge...> - 2011-06-06 06:01:41
|
If you really don't have your metadata, your files are dead... You need metadata to know about them. Of course you can try to recover by reading the surface of hard drives on chunkservers but this would be very tedious... Regards Michał -----Original Message----- From: Tuukka Luolamo [mailto:tlu...@gm...] Sent: Monday, June 06, 2011 7:56 AM To: Michal Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] Problems after power failure Well I guess the question is in the impossible situation that the metadata were to be completely corrupt or lost because there is no backup and master goes dead, what recourse is there if any? Thanks 2011/6/5 Michal Borychowski <mic...@ge...>: > Hi! > > That's interesting... So this bit could be changed by your CPU or > motherboard or it is an error in the software but it would be very very > difficult to find it as the error probably cannot be easily repeated. > > Regarding your previous questions - it's almost impossible that your > metadata is "completely" corrupt. You really can recover most of your files > at most times. This situation was really weird as there was a single change > in an information bit. Normally you would run mfsmetarestore with a flag -i > (ignore) and it would just ignore this one file. Unfortunately you would not > be able to repair this single bit as this was quite a complicated process. > > > Kind regards > Michał > > > -----Original Message----- > From: Tuukka Luolamo [mailto:tlu...@gm...] > Sent: Monday, June 06, 2011 5:01 AM > To: Michal Borychowski; moo...@li... > Subject: Re: [Moosefs-users] Problems after power failure > > OK I ran the memtest on the master server, It got through without > finding any errors. > > 2011/6/2 Michal Borychowski <mic...@ge...>: >> Hi! >> >> In the meantime please run the memtest - we are curious if it really was >> hardware problem or maybe it could be a software problem >> >> >> Regards >> Michal >> >> -----Original Message----- >> From: Tuukka Luolamo [mailto:tlu...@gm...] >> Sent: Thursday, June 02, 2011 9:55 AM >> To: WK >> Cc: moo...@li... >> Subject: Re: [Moosefs-users] Problems after power failure >> >> OK I put in place the file the dev sent me and can not see any data > loss... >> >> I found the file in question the one I got in the error and it seems fine. >> >> The whole system is up and functioning. >> >> I run the system on a old desktop computer and a another PC I bought >> for $25 so the dev recommends making sure you have good memory, but I >> guess I am using whatever I got =) Aside from this error everything >> has been fine. Didn't run the memtest they recommended, but I would >> not count out memory errors. >> >> However I would like to understand the situation better mainly for >> what are my recourses. As WK articulated already had my metadata been >> completely corrupt would I have lost all my data? Would I lose just >> one file the one with the error? And can I fix this error myself? >> >> Thanks, >> >> Tuukka >> >> On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote: >>> On 6/1/2011 2:30 AM, Michal Borychowski wrote: >>>> >>>> We think that this problem could be caused by your RAM in the master. We >>>> recommend using RAM with parity control. You can also run a test from >>>> http://www.memtest.org/ on your server and check your existing RAM. Of >>>> course, the bit could have been changed also on the motherboard level or >> CPU >>>> - which is much less probable. 
>>>> >>>> >>>> Also you can see in the log that file 7538 is located between 7553 and >> 7555: >>> >>> >>> So in a situation like this where the metadata is now corrupt. >>> >>> Is the problem fixable with only the loss of the one file? (and how does >>> one fix it). >>> >>> or is his entire MFS setup completely corrupt and he would need to have >>> had a backup? >>> >>> Can I assume that older archived versions of the metadata.mfs could be >>> used to recover most of the files. >>> >>> -bill >>> >>> >> > ---------------------------------------------------------------------------- >> -- >>> Simplify data backup and recovery for your virtual environment with >> vRanger. >>> Installation's a snap, and flexible recovery options mean your data is >> safe, >>> secure and there when you need it. Data protection magic? >>> Nope - It's vRanger. Get your free trial download today. >>> http://p.sf.net/sfu/quest-sfdev2dev >>> _______________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> >> > ---------------------------------------------------------------------------- >> -- >> Simplify data backup and recovery for your virtual environment with > vRanger. >> >> Installation's a snap, and flexible recovery options mean your data is > safe, >> secure and there when you need it. Data protection magic? >> Nope - It's vRanger. Get your free trial download today. >> http://p.sf.net/sfu/quest-sfdev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> >> > > ---------------------------------------------------------------------------- > -- > Simplify data backup and recovery for your virtual environment with vRanger. > Installation's a snap, and flexible recovery options mean your data is safe, > secure and there when you need it. Discover what all the cheering's about. > Get your free trial download today. > http://p.sf.net/sfu/quest-dev2dev2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Tuukka L. <tlu...@gm...> - 2011-06-06 05:56:19
|
Well I guess the question is in the impossible situation that the metadata were to be completely corrupt or lost because there is no backup and master goes dead, what recourse is there if any? Thanks 2011/6/5 Michal Borychowski <mic...@ge...>: > Hi! > > That's interesting... So this bit could be changed by your CPU or > motherboard or it is an error in the software but it would be very very > difficult to find it as the error probably cannot be easily repeated. > > Regarding your previous questions - it's almost impossible that your > metadata is "completely" corrupt. You really can recover most of your files > at most times. This situation was really weird as there was a single change > in an information bit. Normally you would run mfsmetarestore with a flag -i > (ignore) and it would just ignore this one file. Unfortunately you would not > be able to repair this single bit as this was quite a complicated process. > > > Kind regards > Michał > > > -----Original Message----- > From: Tuukka Luolamo [mailto:tlu...@gm...] > Sent: Monday, June 06, 2011 5:01 AM > To: Michal Borychowski; moo...@li... > Subject: Re: [Moosefs-users] Problems after power failure > > OK I ran the memtest on the master server, It got through without > finding any errors. > > 2011/6/2 Michal Borychowski <mic...@ge...>: >> Hi! >> >> In the meantime please run the memtest - we are curious if it really was >> hardware problem or maybe it could be a software problem >> >> >> Regards >> Michal >> >> -----Original Message----- >> From: Tuukka Luolamo [mailto:tlu...@gm...] >> Sent: Thursday, June 02, 2011 9:55 AM >> To: WK >> Cc: moo...@li... >> Subject: Re: [Moosefs-users] Problems after power failure >> >> OK I put in place the file the dev sent me and can not see any data > loss... >> >> I found the file in question the one I got in the error and it seems fine. >> >> The whole system is up and functioning. >> >> I run the system on a old desktop computer and a another PC I bought >> for $25 so the dev recommends making sure you have good memory, but I >> guess I am using whatever I got =) Aside from this error everything >> has been fine. Didn't run the memtest they recommended, but I would >> not count out memory errors. >> >> However I would like to understand the situation better mainly for >> what are my recourses. As WK articulated already had my metadata been >> completely corrupt would I have lost all my data? Would I lose just >> one file the one with the error? And can I fix this error myself? >> >> Thanks, >> >> Tuukka >> >> On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote: >>> On 6/1/2011 2:30 AM, Michal Borychowski wrote: >>>> >>>> We think that this problem could be caused by your RAM in the master. We >>>> recommend using RAM with parity control. You can also run a test from >>>> http://www.memtest.org/ on your server and check your existing RAM. Of >>>> course, the bit could have been changed also on the motherboard level or >> CPU >>>> - which is much less probable. >>>> >>>> >>>> Also you can see in the log that file 7538 is located between 7553 and >> 7555: >>> >>> >>> So in a situation like this where the metadata is now corrupt. >>> >>> Is the problem fixable with only the loss of the one file? (and how does >>> one fix it). >>> >>> or is his entire MFS setup completely corrupt and he would need to have >>> had a backup? >>> >>> Can I assume that older archived versions of the metadata.mfs could be >>> used to recover most of the files. 
>>> >>> -bill >>> >>> >> > ---------------------------------------------------------------------------- >> -- >>> Simplify data backup and recovery for your virtual environment with >> vRanger. >>> Installation's a snap, and flexible recovery options mean your data is >> safe, >>> secure and there when you need it. Data protection magic? >>> Nope - It's vRanger. Get your free trial download today. >>> http://p.sf.net/sfu/quest-sfdev2dev >>> _______________________________________________ >>> moosefs-users mailing list >>> moo...@li... >>> https://lists.sourceforge.net/lists/listinfo/moosefs-users >>> >> >> > ---------------------------------------------------------------------------- >> -- >> Simplify data backup and recovery for your virtual environment with > vRanger. >> >> Installation's a snap, and flexible recovery options mean your data is > safe, >> secure and there when you need it. Data protection magic? >> Nope - It's vRanger. Get your free trial download today. >> http://p.sf.net/sfu/quest-sfdev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> >> > > ---------------------------------------------------------------------------- > -- > Simplify data backup and recovery for your virtual environment with vRanger. > Installation's a snap, and flexible recovery options mean your data is safe, > secure and there when you need it. Discover what all the cheering's about. > Get your free trial download today. > http://p.sf.net/sfu/quest-dev2dev2 > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: Michal B. <mic...@ge...> - 2011-06-06 05:47:12
|
Hi! That's interesting... So this bit could be changed by your CPU or motherboard or it is an error in the software but it would be very very difficult to find it as the error probably cannot be easily repeated. Regarding your previous questions - it's almost impossible that your metadata is "completely" corrupt. You really can recover most of your files at most times. This situation was really weird as there was a single change in an information bit. Normally you would run mfsmetarestore with a flag -i (ignore) and it would just ignore this one file. Unfortunately you would not be able to repair this single bit as this was quite a complicated process. Kind regards Michał -----Original Message----- From: Tuukka Luolamo [mailto:tlu...@gm...] Sent: Monday, June 06, 2011 5:01 AM To: Michal Borychowski; moo...@li... Subject: Re: [Moosefs-users] Problems after power failure OK I ran the memtest on the master server, It got through without finding any errors. 2011/6/2 Michal Borychowski <mic...@ge...>: > Hi! > > In the meantime please run the memtest - we are curious if it really was > hardware problem or maybe it could be a software problem > > > Regards > Michal > > -----Original Message----- > From: Tuukka Luolamo [mailto:tlu...@gm...] > Sent: Thursday, June 02, 2011 9:55 AM > To: WK > Cc: moo...@li... > Subject: Re: [Moosefs-users] Problems after power failure > > OK I put in place the file the dev sent me and can not see any data loss... > > I found the file in question the one I got in the error and it seems fine. > > The whole system is up and functioning. > > I run the system on a old desktop computer and a another PC I bought > for $25 so the dev recommends making sure you have good memory, but I > guess I am using whatever I got =) Aside from this error everything > has been fine. Didn't run the memtest they recommended, but I would > not count out memory errors. > > However I would like to understand the situation better mainly for > what are my recourses. As WK articulated already had my metadata been > completely corrupt would I have lost all my data? Would I lose just > one file the one with the error? And can I fix this error myself? > > Thanks, > > Tuukka > > On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote: >> On 6/1/2011 2:30 AM, Michal Borychowski wrote: >>> >>> We think that this problem could be caused by your RAM in the master. We >>> recommend using RAM with parity control. You can also run a test from >>> http://www.memtest.org/ on your server and check your existing RAM. Of >>> course, the bit could have been changed also on the motherboard level or > CPU >>> - which is much less probable. >>> >>> >>> Also you can see in the log that file 7538 is located between 7553 and > 7555: >> >> >> So in a situation like this where the metadata is now corrupt. >> >> Is the problem fixable with only the loss of the one file? (and how does >> one fix it). >> >> or is his entire MFS setup completely corrupt and he would need to have >> had a backup? >> >> Can I assume that older archived versions of the metadata.mfs could be >> used to recover most of the files. >> >> -bill >> >> > ---------------------------------------------------------------------------- > -- >> Simplify data backup and recovery for your virtual environment with > vRanger. >> Installation's a snap, and flexible recovery options mean your data is > safe, >> secure and there when you need it. Data protection magic? >> Nope - It's vRanger. Get your free trial download today. 
>> http://p.sf.net/sfu/quest-sfdev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > ---------------------------------------------------------------------------- > -- > Simplify data backup and recovery for your virtual environment with vRanger. > > Installation's a snap, and flexible recovery options mean your data is safe, > secure and there when you need it. Data protection magic? > Nope - It's vRanger. Get your free trial download today. > http://p.sf.net/sfu/quest-sfdev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > ---------------------------------------------------------------------------- -- Simplify data backup and recovery for your virtual environment with vRanger. Installation's a snap, and flexible recovery options mean your data is safe, secure and there when you need it. Discover what all the cheering's about. Get your free trial download today. http://p.sf.net/sfu/quest-dev2dev2 _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
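For reference, the usual mfsmetarestore invocations touched on in this thread, run in the master's data directory (paths are assumed; -i asks the tool to ignore metadata inconsistencies such as the damaged node discussed above, at the cost of dropping the affected entries):

    # Automatic mode: use metadata.mfs.back plus the changelog.*.mfs files in the data dir
    mfsmetarestore -a -d /usr/local/var/mfs

    # Explicit mode, optionally ignoring metadata structure errors with -i
    mfsmetarestore -i -m metadata.mfs.back -o metadata.mfs changelog.*.mfs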
From: Tuukka L. <tlu...@gm...> - 2011-06-06 03:01:23
|
OK I ran the memtest on the master server, It got through without finding any errors. 2011/6/2 Michal Borychowski <mic...@ge...>: > Hi! > > In the meantime please run the memtest - we are curious if it really was > hardware problem or maybe it could be a software problem > > > Regards > Michal > > -----Original Message----- > From: Tuukka Luolamo [mailto:tlu...@gm...] > Sent: Thursday, June 02, 2011 9:55 AM > To: WK > Cc: moo...@li... > Subject: Re: [Moosefs-users] Problems after power failure > > OK I put in place the file the dev sent me and can not see any data loss... > > I found the file in question the one I got in the error and it seems fine. > > The whole system is up and functioning. > > I run the system on a old desktop computer and a another PC I bought > for $25 so the dev recommends making sure you have good memory, but I > guess I am using whatever I got =) Aside from this error everything > has been fine. Didn't run the memtest they recommended, but I would > not count out memory errors. > > However I would like to understand the situation better mainly for > what are my recourses. As WK articulated already had my metadata been > completely corrupt would I have lost all my data? Would I lose just > one file the one with the error? And can I fix this error myself? > > Thanks, > > Tuukka > > On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote: >> On 6/1/2011 2:30 AM, Michal Borychowski wrote: >>> >>> We think that this problem could be caused by your RAM in the master. We >>> recommend using RAM with parity control. You can also run a test from >>> http://www.memtest.org/ on your server and check your existing RAM. Of >>> course, the bit could have been changed also on the motherboard level or > CPU >>> - which is much less probable. >>> >>> >>> Also you can see in the log that file 7538 is located between 7553 and > 7555: >> >> >> So in a situation like this where the metadata is now corrupt. >> >> Is the problem fixable with only the loss of the one file? (and how does >> one fix it). >> >> or is his entire MFS setup completely corrupt and he would need to have >> had a backup? >> >> Can I assume that older archived versions of the metadata.mfs could be >> used to recover most of the files. >> >> -bill >> >> > ---------------------------------------------------------------------------- > -- >> Simplify data backup and recovery for your virtual environment with > vRanger. >> Installation's a snap, and flexible recovery options mean your data is > safe, >> secure and there when you need it. Data protection magic? >> Nope - It's vRanger. Get your free trial download today. >> http://p.sf.net/sfu/quest-sfdev2dev >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> > > ---------------------------------------------------------------------------- > -- > Simplify data backup and recovery for your virtual environment with vRanger. > > Installation's a snap, and flexible recovery options mean your data is safe, > secure and there when you need it. Data protection magic? > Nope - It's vRanger. Get your free trial download today. > http://p.sf.net/sfu/quest-sfdev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
From: rxknhe <rx...@gm...> - 2011-06-06 02:59:38
|
- mfsinfo or mfsstats (?) - returning how many files, folders, missing chunks, etc. there are

This is a great idea. Currently we monitor MFS cluster status by web-page scraping, so a tool like this would help a lot in writing direct Nagios plugins for monitoring. If the tool can also provide a list of under-goal chunks (typically the 'red' status in the CGI GUI), that is essential for monitoring as well. Also, once chunks are missing (detected as under-goal chunks), it would be great if the tool could map a missing chunk back to its file information.

rxknhe
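Until such a tool exists, a crude Nagios-style check can only scrape the CGI monitor, along the lines of the sketch below. The port (9425) is the usual mfscgiserv default, but the page name and the string match against its HTML are assumptions that would need adjusting to a real installation:

    #!/bin/sh
    # Very rough health probe against the MooseFS CGI monitor (assumed URL layout)
    URL="http://mfsmaster.example.com:9425/mfs.cgi"

    page=$(curl -sf "$URL") || { echo "CRITICAL: CGI monitor unreachable"; exit 2; }

    # Alert if the info page mentions missing chunks (string match is heuristic)
    if echo "$page" | grep -qi "missing"; then
        echo "WARNING: master reports missing chunks"; exit 1
    fi
    echo "OK"; exit 0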