From: Bán M. <ba...@vo...> - 2010-09-15 12:22:51
|
Hi, we recently started to use it at the University of Debrecen, Hungary, on 7 machines with ~3 TB of data. Miklos

On Wed, 15 Sep 2010 13:50:47 +0200 Ioannis Aslanidis <ias...@fl...> wrote:
> Hello,
>
> I am wondering if there is any list of companies that use MooseFS for
> their infrastructure. I need to know how widely spread MooseFS is
> before being able to implement it. Can anyone provide info on that?
>
> Thank you.
From: Ioannis A. <ias...@fl...> - 2010-09-15 11:51:19
|
Hello, I am wondering if there is any list of companies that use MooseFS for their infrastructure. I need to know how widely spread MooseFS is before being able to implement it. Can anyone provide info on that? Thank you. -- Ioannis Aslanidis System and Network Administrator Flumotion Services, S.A. sys...@fl... Office Phone: +34 93 508 63 59 Mobile Phone: +34 672 20 45 75 |
From: Alexander A. <akh...@ri...> - 2010-09-15 11:48:58
|
Hi all! As far as I understand, "location awareness" is not exactly what Ioannis expects. In the scenario where the whole of POP1 goes down and the metadata server was located in POP1, we have data ambiguity, because POP1 may merely have been cut off by a WAN failure while the real metadata server is still alive. So in this case we cannot promote the metalogger in POP2 to the master role.

wbr Alexander Akhobadze

======================================================
You wrote on 8 September 2010, 21:44:50:
======================================================

According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future: "Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack).

----- Original Message -----
From: "Ioannis Aslanidis" <ias...@fl...>
To: moo...@li...
Sent: Wednesday, September 8, 2010 11:37:08 AM
Subject: [Moosefs-users] Grouping chunk servers

Hello,

I am testing out MooseFS for around 50 to 100 terabytes of data.

I have been successful in setting up the whole environment. It was pretty quick and easy, actually. I was able to replicate with goal=3 and it worked really nicely.

At this point, there is only one requirement that I was not able to accomplish. I require 3 copies of a certain chunk, but my storage machines are distributed across two points of presence.

I require that each of the points of presence contains at least one copy of the chunks. This is fine when you have 3 chunk servers, but it won't work if you have 6 chunk servers. The scenario is the following:

POP1: 4 chunk servers (need 2 replicas here)
POP2: 2 chunk servers (need 1 replica here)

I need this because if the whole of POP1 or the whole of POP2 goes down, I still need to be able to access the contents. Writes are normally only performed in POP1, so there are normally only reads in POP2.

The situation is worse if I add 2 more chunk servers in POP1 and 1 more chunk server in POP2.

Is there a way to somehow tell MooseFS that the 4 chunk servers of POP1 are in one group and that there should be at least 1 replica in this group, and that the 2 chunk servers of POP2 are in another group and that there should be at least 1 replica in this group?

Is there any way to accomplish this?

Regards.
From: Anh K. H. <ky...@vi...> - 2010-09-14 03:51:58
|
On Mon, 13 Sep 2010 23:14:14 +0700 Cuong Hoang Bui <bhc...@gm...> wrote: > > I'm really care about configuring master and backup failover for > MooseFS. Oops... Are you Vietnamese? It's very nice to see you here, on MFS list :P I am using it, too. MFS is very nice. Cheers, -- Anh Ky Huynh |
From: Ioannis A. <ias...@fl...> - 2010-09-13 16:41:09
|
You may want to use keepalive instead of ucarp. In any case, you tell either of them that when the condition triggers, they have to run a set of actions. See http://www.keepalived.org/documentation.html for details. On Mon, Sep 13, 2010 at 6:14 PM, Cuong Hoang Bui <bhc...@gm...> wrote: > Hi, > > In the section "HOW TO PREPARE A FAIL PROOF SOLUTION WITH A REDUNDANT > MASTER?", I'm not clear about configuration. Please tell me details. > With 2 servers A, B, I install mfsmaster on both. > - Configure A as master (CARP master), let mfsmaster running. > - Let mfsmaster stop on B. Install metalogger on B, let it running. > > There are some cases as below. > 1. Network connectivity on A broken, then CARP on B becomes master, but > mfsmaster not running. How do I start it automatically in this case? > > 2. Network connectivity on A is okay but mfsmaster process is down. What > should I do in this case? How do I monitor mfsmaster asynchronous so > that I will shutdown CARP interface to switch to backup automatically? > I dont want to use cron job to poll mfsmaster process. > > Please help me show some solutions for these cases. > > Besides, when I read this section, I'm not clear about this paragraph > === > You also need an extra script run continuously testing the CARP > interface state which in case if this interface goes in a MASTER mode > would get two or three newest "changelog" files from any chunkserver > (just by using "scp"), would also start mfsmetarestore and then > mfsmaster. The switch time should take approximately several seconds and > along with time necessary for reconnecting chunkservers a new master > would be fully functional in about a minute (in both read and write modes). > === > change logs file from chunkserver? Could you tell me details for this > paragraph with a specific example. > > I'm really care about configuring master and backup failover for MooseFS. > > -- > ********************** > Regards, > Cuong Hoang Bui > ct...@ct... > bhc...@gm... > ********************** > > > ------------------------------------------------------------------------------ > Start uncovering the many advantages of virtual appliances > and start using them to simplify application deployment and > accelerate your shift to cloud computing > http://p.sf.net/sfu/novell-sfdev2dev > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- Ioannis Aslanidis System and Network Administrator Flumotion Services, S.A. sys...@fl... Office Phone: +34 93 508 63 59 Mobile Phone: +34 672 20 45 75 |
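A minimal keepalived sketch of the kind of setup described above could look like the following. The interface name, virtual IP, router id and script paths are placeholders rather than values taken from this thread, and the promotion script it calls is only sketched (see after the next message):

```conf
# Hypothetical /etc/keepalived/keepalived.conf on both master candidates;
# interface, VIP, router id and paths are assumptions.
vrrp_script chk_mfsmaster {
    script "/usr/bin/pidof mfsmaster"   # healthy only while mfsmaster runs
    interval 2
    fall 2
    rise 2
}

vrrp_instance MFS_MASTER_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100                 # give the preferred node a higher priority
    advert_int 1
    virtual_ipaddress {
        192.168.10.100/24        # the address clients and chunkservers use
    }
    track_script {
        chk_mfsmaster
    }
    # run the promotion steps (changelog fetch, mfsmetarestore, mfsmaster)
    # when this node takes over the VIP
    notify_master "/usr/local/sbin/mfs-promote.sh"
}
```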
From: Cuong H. B. <bhc...@gm...> - 2010-09-13 16:14:33
|
Hi,

In the section "HOW TO PREPARE A FAIL PROOF SOLUTION WITH A REDUNDANT MASTER?", the configuration is not clear to me. Please give me some details. With 2 servers A and B, I install mfsmaster on both:
- Configure A as master (CARP master) and leave mfsmaster running.
- Stop mfsmaster on B, install the metalogger on B and leave it running.

There are some cases, as below.
1. Network connectivity on A breaks, then CARP on B becomes master, but mfsmaster is not running there. How do I start it automatically in this case?
2. Network connectivity on A is okay but the mfsmaster process is down. What should I do in this case? How do I monitor mfsmaster asynchronously so that I can shut down the CARP interface and switch to the backup automatically? I don't want to use a cron job to poll the mfsmaster process.

Please show me some solutions for these cases.

Besides, when I read this section, this paragraph is not clear to me:
=== You also need an extra script run continuously testing the CARP interface state which in case if this interface goes in a MASTER mode would get two or three newest "changelog" files from any chunkserver (just by using "scp"), would also start mfsmetarestore and then mfsmaster. The switch time should take approximately several seconds and along with time necessary for reconnecting chunkservers a new master would be fully functional in about a minute (in both read and write modes). ===
Change log files from a chunkserver? Could you explain this paragraph with a specific example?

I really care about configuring master and backup failover for MooseFS.

-- ********************** Regards, Cuong Hoang Bui ct...@ct... bhc...@gm... **********************
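A rough shell sketch of the "extra script" described in the FAQ paragraph quoted above, i.e. what a CARP-up or keepalived notify_master hook might do on the backup node. The host names, paths, file names and the number of changelogs fetched are assumptions; this is not an official MooseFS script:

```sh
#!/bin/sh
# mfs-promote.sh - illustrative sketch only; adjust hosts and paths.
set -e

DATADIR=/var/lib/mfs
CHUNKSERVER=chunkserver1.example.com   # any chunkserver keeping recent changelogs

cd "$DATADIR"

# 1. Fetch the two newest changelog files, as the quoted FAQ text suggests.
scp "$CHUNKSERVER:$DATADIR/changelog.0.mfs" "$CHUNKSERVER:$DATADIR/changelog.1.mfs" .

# 2. Rebuild an up-to-date metadata.mfs from the metalogger's metadata backup
#    plus the local and fetched changelogs.
mfsmetarestore -m metadata_ml.mfs.back -o metadata.mfs changelog_ml*.mfs changelog*.mfs

# 3. Start the master; chunkservers and clients reconnect through the shared IP.
mfsmaster start
```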
From: 欧阳晓华 <toa...@gm...> - 2010-09-13 15:13:01
|
Hi,

The files cannot be restored, no matter whether the metadata comes from the master server or from the metalogger server; only metadata.mfs.back.tmp can be used for the restore. It sounds unbelievable, but that is really the case. I don't know what could have caused this error. Fortunately, this file server only holds backup files, and there seem to be no files lost.

Thank you for your suggestion.

2010/9/13 Michał Borychowski <mic...@ge...>
> Hi!
>
> The situation you describe should not have taken place, as overwriting
> metadata.mfs.back with metadata.mfs.back.tmp should be an atomic operation
> in the file system. So theoretically the file metadata.mfs.back should be a
> proper metadata file and metadata.mfs.back.tmp can contain partial data.
> First, you should take metadata.mfs.back and only later metadata.mfs.back.tmp.
> If for some reason metadata.mfs.back is not accessible, you should use
> metadata.mfs.back from a backup or metalogger machine, and only as a last
> resort metadata.mfs.back.tmp.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
> *From:* 欧阳晓华 [mailto:toa...@gm...]
> *Sent:* Sunday, September 12, 2010 7:01 AM
> *To:* moo...@li...
> *Subject:* [Moosefs-users] master shutdown unexpected
>
> hello: my master server shut down after losing power. When I started the
> server again, I used mfsmetarestore -a to restore the mfs metadata. It
> reported that metadata.mfs.back could not be read, and I found that
> var/mfs/metadata.mfs.back existed but was smaller than metadata.mfs.back.tmp.
> I copied metadata.mfs.back.tmp to metadata.mfs.back and mfsmetarestore then
> ran OK. I want to know if this action was correct and whether any data was
> lost. What can I do when the master shuts down unexpectedly? Thanks!
From: Michał B. <mic...@ge...> - 2010-09-13 11:29:24
|
Hello again,

Your changelog_ml.0.mfs never got filled? Saving on the metalogger takes place through a write buffer which is flushed by the operating system; this is much quicker. It also means that in the case of a metalogger failure it will be missing some recent changes. But in the case of a master failure, after shutting down the metalogger process, the system would write all data waiting in the buffer and the metadata file on the metalogger machine would be complete. We have made data flushing automatic (every second), which won't affect performance but will possibly make this behaviour clearer to users.

Kind regards
Michał

-----Original Message-----
From: Bán Miklós [mailto:ba...@vo...]
Sent: Monday, September 06, 2010 4:16 PM
To: moo...@li...
Subject: Re: [Moosefs-users] changelogs on the metalogger

On Mon, 6 Sep 2010 13:42:30 +0200 Michał Borychowski <mic...@ge...> wrote:
> Hi!
>
> Metaloggers should continuously receive the current changes from the
> master server and write them into its own text change logs named
> changelog_ml.0.mfs.
>
> How do you know that in your system they are saved hourly? Don't they
> increment with every change in the filesystem?

Yes, exactly. There are no new lines in the metalogger server's changelog_ml.0.mfs while the master's changelog.0.mfs is updating. Nevertheless, sessions_ml.mfs is updating continuously. My server version is 1.6.7 and it is running on Ubuntu Jaunty. Here is a short log excerpt from a metalogger:

Sep 6 14:02:00 fekete mfsmetalogger[30553]: sessions downloaded 2374B/0.000435s (5.457 MB/s)
Sep 6 14:02:02 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/09/chunk_0000000000000009_00000001.mfs
Sep 6 14:02:12 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0C/chunk_000000000000000C_00000001.mfs
Sep 6 14:02:22 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0E/chunk_000000000000000E_00000001.mfs
Sep 6 14:02:32 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/10/chunk_0000000000000010_00000001.mfs
Sep 6 14:02:42 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0A/chunk_000000000000000A_00000001.mfs
Sep 6 14:02:52 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0F/chunk_000000000000000F_00000001.mfs
Sep 6 14:03:00 fekete mfsmetalogger[30553]: sessions downloaded 2374B/0.000570s (4.165 MB/s)

> Regarding your second question - yes, this is right. For now, the
> metalogger doesn't download the metadata file upon starting. We know
> about this shortcoming and we'll fix the behaviour soon.

Thanx. &Miklos
From: Michał B. <mic...@ge...> - 2010-09-13 11:23:05
|
Hi!

The situation you describe should not have taken place, as overwriting metadata.mfs.back with metadata.mfs.back.tmp should be an atomic operation in the file system. So theoretically the file metadata.mfs.back should be a proper metadata file and metadata.mfs.back.tmp can contain partial data. First, you should take metadata.mfs.back and only later metadata.mfs.back.tmp. If for some reason metadata.mfs.back is not accessible, you should use metadata.mfs.back from a backup or metalogger machine, and only as a last resort metadata.mfs.back.tmp.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: 欧阳晓华 [mailto:toa...@gm...]
Sent: Sunday, September 12, 2010 7:01 AM
To: moo...@li...
Subject: [Moosefs-users] master shutdown unexpected

hello: my master server shut down after losing power. When I started the server again, I used mfsmetarestore -a to restore the mfs metadata. It reported that metadata.mfs.back could not be read, and I found that var/mfs/metadata.mfs.back existed but was smaller than metadata.mfs.back.tmp. I copied metadata.mfs.back.tmp to metadata.mfs.back and mfsmetarestore then ran OK. I want to know if this action was correct and whether any data was lost. What can I do when the master shuts down unexpectedly? Thanks!
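Put as a rough shell sketch, the recovery order described above would be roughly the following; the data directory, file names and metalogger host are assumptions to adapt to the actual installation:

```sh
# Recovery order sketched from the advice above (illustrative only).
cd /var/lib/mfs

if [ ! -s metadata.mfs.back ]; then
    # metadata.mfs.back is missing or empty: prefer a copy from a backup or
    # metalogger machine, and fall back to the .tmp file only as a last resort.
    scp metalogger.example.com:/var/lib/mfs/metadata_ml.mfs.back metadata.mfs.back \
      || cp metadata.mfs.back.tmp metadata.mfs.back
fi

# Replay the changelogs on top of the chosen file and write a fresh metadata.mfs.
mfsmetarestore -a

# Start the master again.
mfsmaster start
```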
From: Michał B. <mic...@ge...> - 2010-09-13 11:17:37
|
For the moment there is no special method for finding such files. We will include it in future "mastertools". We use "mfsmetadump" and "grep" - so for now this is really "manual" search. Regards Michał -----Original Message----- From: Bán Miklós [mailto:ba...@vo...] Sent: Friday, September 10, 2010 10:56 PM To: moo...@li... Subject: [Moosefs-users] finding the files Hi, is there a way to find a file based on their chunk info? Today I found an error log on the cgiserv screen: "2010-09-10 21:19:58 on chunk: 372210" After I have found this line in the syslog of one of my chunkserver: mfschunkserver[15693]: read_block_from_chunk: file:/mnt/mfs_hd1/F2/chunk_000000000005ADF2_00000001.mfs - crc error Which is the affected file? After I greped the changelogs for "372210": changelog.8.mfs:3722100: 1284129925|AQUIRE(432537,30) changelog.8.mfs:3722101: 1284129925|WRITE(432537,0,1):440095 changelog.8.mfs:3722102: 1284129925|LENGTH(432537,16119) changelog.8.mfs:3722103: 1284129925|UNLOCK(440095) changelog.8.mfs:3722104: 1284129925|ATTR(432537,384,1011,100,1280486196,1209062162) changelog.8.mfs:3722105: 1284129925|ATTR(432537,420,1011,100,1280486196,1209062162) changelog.8.mfs:3722106: 1284129925|CREATE(428569,zzz.qd,f,384,1011,100,0):432538 changelog.8.mfs:3722107: 1284129925|AQUIRE(432538,30) changelog.8.mfs:3722108: 1284129925|WRITE(432538,0,1):440096 changelog.8.mfs:3722109: 1284129925|LENGTH(432538,17947) mfsfileinfo ./zzz.qd ./zzz.qd: chunk 0: 000000000005ADF2_00000001 / (id:372210 ver:1) no valid copies !!! I could replace that file from a backup, but I don't why happened this error and how can I find easily a file based on their chunk info. Is there any "reverse search" solution? Thanx, Miklos ---------------------------------------------------------------------------- -- Start uncovering the many advantages of virtual appliances and start using them to simplify application deployment and accelerate your shift to cloud computing http://p.sf.net/sfu/novell-sfdev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
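As one possible (unofficial) form of that manual search, using only tools mentioned in this thread; the exact mfsmetadump output format and all paths here are assumptions:

```sh
CHUNKID=372210    # chunk id reported in the CGI/syslog error

# 1. Grep the master's metadata dump for the chunk id to find the owning file.
mfsmetadump /var/lib/mfs/metadata.mfs.back | grep -w "$CHUNKID"

# 2. Alternatively, grep the changelogs for the id, as in the original report.
grep -w "$CHUNKID" /var/lib/mfs/changelog*.mfs

# 3. Confirm the suspect file from a client mount.
mfsfileinfo /mnt/mfs/path/to/suspect-file
```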
From: Michał B. <mic...@ge...> - 2010-09-13 10:23:52
|
Hi!

You can find the file name and chunk number in the CGI interface or in the logs.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

From: Kristofer Pettijohn [mailto:kri...@cy...]
Sent: Wednesday, September 08, 2010 11:44 PM
To: moo...@li...
Subject: [Moosefs-users] No valid copies

Hello, I am seeing that I have one chunk with goal=2 but no valid copies. Is there a way for me to determine what chunk (and file that chunk belongs to) that is?
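If only the count of missing copies is known, one slow but simple way to locate the affected files from a client mount is to scan with mfsfileinfo; the mount point is an assumption and the amount of grep context depends on mfsfileinfo's exact output layout:

```sh
# Scan every file under the mount and print those whose chunks lack valid copies.
find /mnt/mfs -type f -print0 \
  | xargs -0 -n 32 mfsfileinfo \
  | grep -B 2 "no valid copies"
```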
From: Michał B. <mic...@ge...> - 2010-09-13 10:18:41
|
Hi! Unfortunately this can cause other problems. If after each save you close the file and later open it, you'll blow the metalogger. Maybe you would rather use: fflush(eptr->logfd). We are further investigating why the changelogs may not save on metalogger. Regards Michał -----Original Message----- From: Bán Miklós [mailto:ba...@vo...] Sent: Wednesday, September 08, 2010 10:33 PM To: moo...@li... Subject: Re: [Moosefs-users] changelogs on the metalogger Hi, Sorry, I was wrong. The version no is 1.6.17. I've patched the mfsmetalogger/masterconn.c file with the following lines: 140a141 > 187a189,192 > if (eptr->logfd) { > fclose(eptr->logfd); > eptr->logfd = NULL; > } which have solved that problem. So this is the bug fix. There was no fclose after opening the changelog_ml.0.mfs. Miklos On Wed, 8 Sep 2010 14:33:39 -0400 Travis Hein <tra...@tr...> wrote: > > On 2010-09-06, at 10:16 AM, Bán Miklós wrote: > > >> > >> Metaloggers should continuously receive the current changes from > >> the master server and write them into its own text change logs > >> named changelog_ml.0.mfs. > >> > >> How do you know that in your system they are save hourly? Don't > >> they increment with every change in the filesystem? > > > > Yes, exactly. There is no new lines on metalogger server's > > changelog_ml.0.mfs, while the master's changelog.0.mfs updating. > > Nevertheless the sessions_ml.mfs is updating continuously. > > My server version is 1.6.7 and it is running on Ubuntu Jaunty. > > > > Did you mean 1.6.17 ?,, or 1.5.7. (There was no 1.6.7 release?) > > I was concerned, if you were not running the latest server and > metalogger, perhaps this would be a bug that has already been fixed. > > Travis > > > > > ---------------------------------------------------------------------------- -- > This SF.net Dev2Dev email is sponsored by: > > Show off your parallel programming skills. > Enter the Intel(R) Threading Challenge 2010. > http://p.sf.net/sfu/intel-thread-sfd > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users ---------------------------------------------------------------------------- -- This SF.net Dev2Dev email is sponsored by: Show off your parallel programming skills. Enter the Intel(R) Threading Challenge 2010. http://p.sf.net/sfu/intel-thread-sfd _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@ge...> - 2010-09-13 09:59:11
|
Hi! Yes, location awareness is on our roadmap. I cannot tell exactly when it would be implemented but probably it would be implemented quite soon :) Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Ioannis Aslanidis [mailto:ias...@fl...] Sent: Wednesday, September 08, 2010 8:17 PM To: Kristofer Pettijohn Cc: moo...@li... Subject: Re: [Moosefs-users] Grouping chunk servers Thank you for the answer. The only thing I need to know now is when to expect this, but at least I know it will be done some day. On Wed, Sep 8, 2010 at 7:44 PM, Kristofer Pettijohn <kri...@cy...> wrote: > According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future: > > "Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack). > > ----- Original Message ----- > From: "Ioannis Aslanidis" <ias...@fl...> > To: moo...@li... > Sent: Wednesday, September 8, 2010 11:37:08 AM > Subject: [Moosefs-users] Grouping chunk servers > > Hello, > > I am testing out MooseFS for around 50 to 100 TeraBytes of data. > > I have been successful to set up the whole environment. It was pretty > quick and easy actually. I was able to replicate with goal=3 and it > worked really nicely. > > At this point, there is only one requirement that I was not able to > accomplish. I require to have 3 copies of a certain chunk, but my > storage machines are distributed in two points of presence. > > I require that each of the points of presence contains at least one > copy of the chunks. This is fine when you have 3 chunk servers, but it > won't work if you have 6 chunk servers. The scenario is the following: > > POP1: 4 chunk servers (need 2 replicas here) > POP2: 2 chunk servers (need 1 replica here) > > I need this because if the whole POP1 or the whole POP2 go down, I > need to still be able to access the contents. Writes are normally only > performed in POP1, so there are normally only reads in POP2. > > The situation is worse if I add 2 more chunk servers in POP1 and 1 > more chunk server in POP2. > > Is there a way to somehow tell MooseFS that the 4 chunk servers of > POP1 are in one group and that there should be at least 1 replica in > this group and that the 2 chunk servers of POP2 are in another group > and that there should be at least 1 replica in this group? > > Is there any way to accomplish this? > > Regards. > > -- > Ioannis Aslanidis > System and Network Administrator > Flumotion Services, S.A. > > sys...@fl... > > Office Phone: +34 93 508 63 59 > Mobile Phone: +34 672 20 45 75 > > ---------------------------------------------------------------------------- -- > This SF.net Dev2Dev email is sponsored by: > > Show off your parallel programming skills. > Enter the Intel(R) Threading Challenge 2010. > http://p.sf.net/sfu/intel-thread-sfd > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- Ioannis Aslanidis System and Network Administrator Flumotion Services, S.A. sys...@fl... 
Office Phone: +34 93 508 63 59 Mobile Phone: +34 672 20 45 75 ---------------------------------------------------------------------------- -- This SF.net Dev2Dev email is sponsored by: Show off your parallel programming skills. Enter the Intel(R) Threading Challenge 2010. http://p.sf.net/sfu/intel-thread-sfd _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michał B. <mic...@ge...> - 2010-09-13 09:55:37
|
From: Kristofer Pettijohn [mailto:kri...@cy...]
Sent: Wednesday, September 08, 2010 5:00 PM
To: moo...@li...
Subject: [Moosefs-users] Chunk servers and disks per server

How does MooseFS treat multiple disks on a chunk server? Does it try to spread the data out in some sort of balance?

[MB] The system first tries to fill the disks which have more free space (counted in percent).

If there are 2 chunk servers, and one has two disks and the other has one disk (so 2 chunk servers, 3 disks total), and there is a goal for a file set to 2, is it possible that both chunks will be stored on different disks of one chunk server, or is MooseFS smart enough to keep them on separate servers?

[MB] The copies will always be kept on separate servers.

Also, are there any commercial support offerings for Moose?

[MB] Yes, we can offer commercial support. Please write to me about what you would need.

[MB] If you need any further assistance, please let us know.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

Thanks, Kris Pettijohn
From: 欧阳晓华 <toa...@gm...> - 2010-09-12 05:01:23
|
hello: my master server shut down after losing power. When I started the server again, I used mfsmetarestore -a to restore the mfs metadata. It reported that metadata.mfs.back could not be read, and I found that var/mfs/metadata.mfs.back existed but was smaller than metadata.mfs.back.tmp. I copied metadata.mfs.back.tmp to metadata.mfs.back and mfsmetarestore then ran OK. I want to know if this action was correct and whether any data was lost. What can I do when the master shuts down unexpectedly? Thanks!
From: Bán M. <ba...@vo...> - 2010-09-10 20:56:27
|
Hi, is there a way to find a file based on its chunk info?

Today I found an error log on the cgiserv screen: "2010-09-10 21:19:58 on chunk: 372210". After that I found this line in the syslog of one of my chunkservers:

mfschunkserver[15693]: read_block_from_chunk: file:/mnt/mfs_hd1/F2/chunk_000000000005ADF2_00000001.mfs - crc error

Which is the affected file? I grepped the changelogs for "372210":

changelog.8.mfs:3722100: 1284129925|AQUIRE(432537,30)
changelog.8.mfs:3722101: 1284129925|WRITE(432537,0,1):440095
changelog.8.mfs:3722102: 1284129925|LENGTH(432537,16119)
changelog.8.mfs:3722103: 1284129925|UNLOCK(440095)
changelog.8.mfs:3722104: 1284129925|ATTR(432537,384,1011,100,1280486196,1209062162)
changelog.8.mfs:3722105: 1284129925|ATTR(432537,420,1011,100,1280486196,1209062162)
changelog.8.mfs:3722106: 1284129925|CREATE(428569,zzz.qd,f,384,1011,100,0):432538
changelog.8.mfs:3722107: 1284129925|AQUIRE(432538,30)
changelog.8.mfs:3722108: 1284129925|WRITE(432538,0,1):440096
changelog.8.mfs:3722109: 1284129925|LENGTH(432538,17947)

mfsfileinfo ./zzz.qd
./zzz.qd: chunk 0: 000000000005ADF2_00000001 / (id:372210 ver:1) no valid copies !!!

I could replace that file from a backup, but I don't know why this error happened or how I can easily find a file based on its chunk info. Is there any "reverse search" solution?

Thanx, Miklos
From: Roast <zha...@gm...> - 2010-09-10 14:01:27
|
This bug has been fixed in the latest version.

On Fri, Sep 10, 2010 at 2:29 PM, Lin Yang <id...@gm...> wrote:
> When my file system reached 8431937 chunks, the chunkserver could not
> connect to the master.
>
> /var/log/messages on the chunkserver says:
>
> Sep 9 16:24:50 be2 mfschunkserver[7052]: connecting ...
> Sep 9 16:24:50 be2 mfschunkserver[7052]: connected to Master
>
> If I change the value in mfs-1.6.15/mfsmaster/matocsserv.c and append a "0"
> to the value of MaxPacketSize, it works fine!
>
> #define MaxPacketSize 500000000  // appended a 0
>
> Hope this is helpful, and sorry for my English.
>
> Regards.
> --
> ning

-- The time you enjoy wasting is not wasted time!
From: Lin Y. <id...@gm...> - 2010-09-10 06:29:43
|
When my file system reached 8431937 chunks, the chunkserver could not connect to the master.

/var/log/messages on the chunkserver says:

Sep 9 16:24:50 be2 mfschunkserver[7052]: connecting ...
Sep 9 16:24:50 be2 mfschunkserver[7052]: connected to Master

If I change the value in mfs-1.6.15/mfsmaster/matocsserv.c and append a "0" to the value of MaxPacketSize, it works fine!

#define MaxPacketSize 500000000  // appended a 0

Hope this is helpful, and sorry for my English.

Regards.
-- ning
From: Michał B. <mic...@ge...> - 2010-09-09 08:28:25
|
Generally speaking errors on disks just happen from time to time. You don't have to bother about them if they don't occur too often (not more than once a week). Regards Michał From: Fabien Germain [mailto:fab...@gm...] Sent: Thursday, September 09, 2010 1:46 AM To: moo...@li... Subject: Re: [Moosefs-users] what does the timestamp mean in "last error" column of disks status Hello, On Wed, Sep 8, 2010 at 10:02 AM, kuer ku <ku...@gm...> wrote: On MFS web interface, there is a "Disks" tab to show disks status in chunkservers. But, I found some timestamp on "last error" column, what does that mean ? It's the timestamp of the last detected error on the disk. It seems that the disk labelled with timestamp works. Where to find the reason why these disks marked with timestamp ?? You'll find the error message in the logs file on the corresponding chunkserver (/var/log/messages or /var/log/syslog, depending on your Linux favor). Fabien |
From: Fabien G. <fab...@gm...> - 2010-09-08 23:46:20
|
Hello,

On Wed, Sep 8, 2010 at 10:02 AM, kuer ku <ku...@gm...> wrote:
> On MFS web interface, there is a "Disks" tab to show disks status in
> chunkservers.
>
> But, I found some timestamp on "last error" column, what does that mean ?

It's the timestamp of the last error detected on the disk.

> It seems that the disk labelled with timestamp works.
>
> Where to find the reason why these disks marked with timestamp ??

You'll find the error message in the log files on the corresponding chunkserver (/var/log/messages or /var/log/syslog, depending on your Linux flavor).

Fabien
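For example, something along these lines, run on the chunkserver in question, pulls out the relevant entries; the log path and the exact message wording depend on the distribution and are assumptions here:

```sh
# Show recent chunkserver I/O errors from syslog (adjust the path to your distro).
grep mfschunkserver /var/log/syslog | grep -iE 'error|crc|damaged' | tail -n 20
```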
From: Kristofer P. <kri...@cy...> - 2010-09-08 21:43:43
|
Hello, I am seeing that I have one chunk with goal=2 but no valid copies. Is there a way for me to determine what chunk (and file that chunk belongs to) that is? |
From: Bán M. <ba...@vo...> - 2010-09-08 20:33:12
|
Hi,

Sorry, I was wrong: the version number is 1.6.17. I've patched mfsmetalogger/masterconn.c with the following lines, which solved the problem:

140a141
>
187a189,192
> if (eptr->logfd) {
> fclose(eptr->logfd);
> eptr->logfd = NULL;
> }

So this is the bug fix: there was no fclose after opening changelog_ml.0.mfs.

Miklos

On Wed, 8 Sep 2010 14:33:39 -0400 Travis Hein <tra...@tr...> wrote:
> On 2010-09-06, at 10:16 AM, Bán Miklós wrote:
> >> Metaloggers should continuously receive the current changes from
> >> the master server and write them into its own text change logs
> >> named changelog_ml.0.mfs.
> >>
> >> How do you know that in your system they are save hourly? Don't
> >> they increment with every change in the filesystem?
> >
> > Yes, exactly. There is no new lines on metalogger server's
> > changelog_ml.0.mfs, while the master's changelog.0.mfs updating.
> > Nevertheless the sessions_ml.mfs is updating continuously.
> > My server version is 1.6.7 and it is running on Ubuntu Jaunty.
>
> Did you mean 1.6.17? Or 1.5.7? (There was no 1.6.7 release.)
>
> I was concerned that, if you were not running the latest server and
> metalogger, perhaps this would be a bug that has already been fixed.
>
> Travis
From: Travis H. <tra...@tr...> - 2010-09-08 18:51:57
|
On 2010-09-06, at 10:16 AM, Bán Miklós wrote:
>> Metaloggers should continuously receive the current changes from the
>> master server and write them into its own text change logs named
>> changelog_ml.0.mfs.
>>
>> How do you know that in your system they are save hourly? Don't they
>> increment with every change in the filesystem?
>
> Yes, exactly. There is no new lines on metalogger server's
> changelog_ml.0.mfs, while the master's changelog.0.mfs updating.
> Nevertheless the sessions_ml.mfs is updating continuously.
> My server version is 1.6.7 and it is running on Ubuntu Jaunty.

Did you mean 1.6.17? Or 1.5.7? (There was no 1.6.7 release.)

I was concerned that, if you were not running the latest server and metalogger, perhaps this would be a bug that has already been fixed.

Travis
From: Travis H. <tra...@tr...> - 2010-09-08 18:46:57
|
On 2010-09-08, at 10:59 AM, Kristofer Pettijohn wrote:
> How does MooseFS treat multiple disks on a chunk server? Does it try to spread the data out in some sort of balance?
> If there are 2 chunk servers, and one has two disks, the other has one disk (so 2 chunk servers, 3 disks total), if there is a goal for a file set to 2, is it possible that the both chunks will be stored on different disks of one chunk server, or is MooseFS smart enough to know to keep them on separate servers?
>
> Thanks,
> Kris Pettijohn

I believe all of the disks within a chunk server appear to the system as one copy. Within the chunk server, the disks are used as the need arises. So in your example, with two chunk servers, when setting the goal for a file to 2, the file will be stored on both chunk servers, but not necessarily using both disks on the chunk server that has more than one disk.

Another thing to consider: in this example, files will only be stored at their desired goal (of 2 here) as long as both chunk servers have room to store new files. If one chunk server were to reach capacity while the other still had free space thanks to its second disk, new files would be stored in the remaining space on that other chunk server, but in an "under goal" condition. The way to achieve the desired goal for these files would then be to add another disk to the first chunk server, or to set up an additional chunk server.

Travis
From: Ioannis A. <ias...@fl...> - 2010-09-08 18:17:36
|
Thank you for the answer. The only thing I need to know now is when to expect this, but at least I know it will be done some day. On Wed, Sep 8, 2010 at 7:44 PM, Kristofer Pettijohn <kri...@cy...> wrote: > According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future: > > "Location awareness" of chunkserver - optional file mapping IP_address->location_number. As a location we understand a rack in which the chunkserver is located. The system would then be able to optimize some operations (eg. prefer chunk copy which is located in the same rack). > > ----- Original Message ----- > From: "Ioannis Aslanidis" <ias...@fl...> > To: moo...@li... > Sent: Wednesday, September 8, 2010 11:37:08 AM > Subject: [Moosefs-users] Grouping chunk servers > > Hello, > > I am testing out MooseFS for around 50 to 100 TeraBytes of data. > > I have been successful to set up the whole environment. It was pretty > quick and easy actually. I was able to replicate with goal=3 and it > worked really nicely. > > At this point, there is only one requirement that I was not able to > accomplish. I require to have 3 copies of a certain chunk, but my > storage machines are distributed in two points of presence. > > I require that each of the points of presence contains at least one > copy of the chunks. This is fine when you have 3 chunk servers, but it > won't work if you have 6 chunk servers. The scenario is the following: > > POP1: 4 chunk servers (need 2 replicas here) > POP2: 2 chunk servers (need 1 replica here) > > I need this because if the whole POP1 or the whole POP2 go down, I > need to still be able to access the contents. Writes are normally only > performed in POP1, so there are normally only reads in POP2. > > The situation is worse if I add 2 more chunk servers in POP1 and 1 > more chunk server in POP2. > > Is there a way to somehow tell MooseFS that the 4 chunk servers of > POP1 are in one group and that there should be at least 1 replica in > this group and that the 2 chunk servers of POP2 are in another group > and that there should be at least 1 replica in this group? > > Is there any way to accomplish this? > > Regards. > > -- > Ioannis Aslanidis > System and Network Administrator > Flumotion Services, S.A. > > sys...@fl... > > Office Phone: +34 93 508 63 59 > Mobile Phone: +34 672 20 45 75 > > ------------------------------------------------------------------------------ > This SF.net Dev2Dev email is sponsored by: > > Show off your parallel programming skills. > Enter the Intel(R) Threading Challenge 2010. > http://p.sf.net/sfu/intel-thread-sfd > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- Ioannis Aslanidis System and Network Administrator Flumotion Services, S.A. sys...@fl... Office Phone: +34 93 508 63 59 Mobile Phone: +34 672 20 45 75 |