From: Kristofer P. <kri...@cy...> - 2010-09-08 17:45:04
According to the roadmap (http://www.moosefs.org/roadmap.html), this is slated for the future:

"Location awareness" of chunkservers - an optional mapping of IP_address->location_number. A location is understood to be the rack in which the chunkserver sits. The system would then be able to optimize some operations (e.g. prefer the chunk copy located in the same rack).

----- Original Message -----
From: "Ioannis Aslanidis" <ias...@fl...>
To: moo...@li...
Sent: Wednesday, September 8, 2010 11:37:08 AM
Subject: [Moosefs-users] Grouping chunk servers

Hello,

I am testing MooseFS for around 50 to 100 terabytes of data. I managed to set up the whole environment; it was quick and easy, and replication with goal=3 worked really nicely.

At this point there is only one requirement I could not meet. I need 3 copies of each chunk, but my storage machines are distributed across two points of presence, and each point of presence must hold at least one copy. This is fine with 3 chunk servers, but it won't work with 6. The scenario is the following:

POP1: 4 chunk servers (need 2 replicas here)
POP2: 2 chunk servers (need 1 replica here)

I need this because if the whole of POP1 or the whole of POP2 goes down, I must still be able to access the contents. Writes are normally performed only in POP1, so POP2 normally serves only reads. The situation gets worse if I add 2 more chunk servers to POP1 and 1 more chunk server to POP2.

Is there a way to tell MooseFS that the 4 chunk servers of POP1 form one group that must hold at least 1 replica, and that the 2 chunk servers of POP2 form another group that must also hold at least 1 replica? Is there any way to accomplish this?

Regards.

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A. sys...@fl...
Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
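For readers who find this thread later: the roadmap item eventually surfaced as the topology ("rack awareness") mechanism of newer MooseFS releases, configured by mapping IP ranges to location numbers in mfstopology.cfg. A hypothetical mapping for the two-POP layout above might look like the sketch below; the file name and syntax come from later MooseFS versions (not the 1.6.x series discussed here), the addresses are invented, and plain topology only steers traffic locality - it does not by itself guarantee one replica per site.

# mfstopology.cfg - hypothetical addresses
192.168.1.0/24    1    # POP1 chunkservers -> location (rack) 1
192.168.2.0/24    2    # POP2 chunkservers -> location (rack) 2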
From: Ioannis A. <ias...@fl...> - 2010-09-08 16:37:35
Hello,

I am testing MooseFS for around 50 to 100 terabytes of data. I managed to set up the whole environment; it was quick and easy, and replication with goal=3 worked really nicely.

At this point there is only one requirement I could not meet. I need 3 copies of each chunk, but my storage machines are distributed across two points of presence, and each point of presence must hold at least one copy. This is fine with 3 chunk servers, but it won't work with 6. The scenario is the following:

POP1: 4 chunk servers (need 2 replicas here)
POP2: 2 chunk servers (need 1 replica here)

I need this because if the whole of POP1 or the whole of POP2 goes down, I must still be able to access the contents. Writes are normally performed only in POP1, so POP2 normally serves only reads. The situation gets worse if I add 2 more chunk servers to POP1 and 1 more chunk server to POP2.

Is there a way to tell MooseFS that the 4 chunk servers of POP1 form one group that must hold at least 1 replica, and that the 2 chunk servers of POP2 form another group that must also hold at least 1 replica? Is there any way to accomplish this?

Regards.

--
Ioannis Aslanidis
System and Network Administrator
Flumotion Services, S.A. sys...@fl...
Office Phone: +34 93 508 63 59
Mobile Phone: +34 672 20 45 75
From: Kristofer P. <kri...@cy...> - 2010-09-08 14:59:47
How does MooseFS treat multiple disks on a chunk server? Does it try to spread the data out in some sort of balance?

If there are 2 chunk servers, one with two disks and the other with one disk (so 2 chunk servers, 3 disks total), and a file has its goal set to 2, is it possible that both copies end up on different disks of the same chunk server, or is MooseFS smart enough to keep them on separate servers?

Also, are there any commercial support offerings for MooseFS?

Thanks,
Kris Pettijohn
From: Bán M. <ba...@vo...> - 2010-09-07 09:53:15
On Tue, 7 Sep 2010 11:44:46 +0200 Michał Borychowski <mic...@ge...> wrote:

> Please, send these two scripts to the list so that we can have a look
> at them

I start these scripts from the init.d. On the master server:

#!/usr/bin/perl
use strict;
use IO::Socket;
use Net::hostent;

my $mfsserver = '/usr/sbin/mfsmaster';
my $port      = 7990;
my $data_path = "/var/lib/mfs";
my ($server, $client, $hostinfo, $n, $foo, $buf);

$server = IO::Socket::INET->new(Proto     => 'tcp',
                                LocalPort => $port,
                                Listen    => SOMAXCONN,
                                Reuse     => 1);
die "can't setup server" unless $server;
print "[Server $0 accepting clients]\n";

system("$mfsserver start");
print "[mfsmaster server has been started]\n";

while ($client = $server->accept()) {
    $client->autoflush(1);
    $hostinfo = gethostbyaddr($client->peeraddr);
    while (<$client>) {
        next unless /\S/;
        if (/^close$/i) {
            last;
        } elsif (/^(meta:)/) {
            $n = $';
            chomp($n);
            $foo = `tail -1 $data_path/changelog.0.mfs`;
            chomp($foo);
            if ($n ne $foo) {
                system("$mfsserver restart");
                open(FH, "<$data_path/metadata.mfs.back") or die $!;
                binmode(FH);
                while (sysread(FH, $buf, 1024)) {
                    syswrite($client, $buf, length($buf));
                }
                close(FH) or die $!;
            }
        } else {
            close $client;
        }
    }
    close $client;
}

On the metalogger server:

#!/usr/bin/perl
use strict;
use IO::Socket;

my $mfslogger = '/usr/sbin/mfsmetalogger';
my $data_path = "/var/lib/mfs";
my $host      = "mfsmaster";
my $port      = 7990;
my ($kidpid, $handle, $buf, $c, $last_line);

$last_line = `tail -1 $data_path/changelog_ml.0.mfs`;

$handle = IO::Socket::INET->new(Proto    => "tcp",
                                PeerAddr => $host,
                                PeerPort => $port)
          or die "can't connect to port $port on $host: $!";
$handle->autoflush(1);

die "can't fork: $!" unless defined($kidpid = fork());

if ($kidpid) {
    open(FH, ">$data_path/metadata_ml.mfs.back") or die $!;
    binmode(FH);
    $c = 0;
    while (sysread($handle, $buf, 1024)) {
        syswrite(FH, $buf, length($buf));
        $c = $c + 1;
    }
    close(FH);
    if ($c) {
        unlink glob("$data_path/changelog_ml*");
        system("touch $data_path/changelog_ml.0.mfs");
        exec("$mfslogger $ARGV[0]");
    }
    kill("TERM", $kidpid);
} else {
    print $handle "meta:$last_line\n";
    print $handle "close\n";
}
From: Bán M. <ba...@vo...> - 2010-09-07 09:26:31
On Mon, 6 Sep 2010 16:17:08 +0200 Bán Miklós <ba...@vo...> wrote:

> Second problem:
> If I stop, wait and start the metalogger server, the changelog will be
> inconsistent. The changes made on the master server while the metalogger
> was stopped do not arrive after I turn it back on.
> Is there any recommendation on how to manage the metalogger servers
> to keep coherent change logs after a failure situation?

Hi,

I wrote two init scripts, one starting mfsmaster and the other mfsmetalogger, which form a TCP client/server pair. When I restart a metalogger server, it sends its last changelog line to the master server. If that line differs from the master's last changelog line, the master server restarts and sends the fresh metadata.mfs back to the metalogger. After receiving the new metadata file, the metalogger clears its changelogs and starts (or restarts).

It works, and I don't think it is the wrong approach, but there is a problem: what happens if there are file operations while the master server is restarting? Is there a lock mechanism on the filesystem while mfsmaster restarts?

Moreover, I think this kind of client/server process to manage metadata updates should not be needed at all, because it could be implemented in the main code. Opinions? It is too difficult for me, sorry.

Would it be interesting to send these two scripts to the mailing list?

&Miklos
From: Bán M. <ba...@vo...> - 2010-09-06 14:17:21
Hi,

I've found something: in masterconn.c, in masterconn_metachanges_log(), the fprintf only writes the changelog entries out when I stop or restart the process (around lines 178-188):

if (eptr->logfd==NULL) {
    eptr->logfd = fopen("changelog_ml.0.mfs","a");
}

data++;
version = get64bit(&data);
if (eptr->logfd) {
    fprintf(eptr->logfd,"%"PRIu64": %s\n",version,data);
} else {
    syslog(LOG_NOTICE,"lost MFS change %"PRIu64": %s",version,data);
}

I've appended these lines:

if (eptr->logfd) {
    fclose(eptr->logfd);
    eptr->logfd = NULL;
}

It seems to work, but I'm not sure whether this is a good way to do it; I did not read the source code thoroughly.

Second problem:
If I stop, wait and start the metalogger server, the changelog will be inconsistent. The changes made on the master server while the metalogger was stopped do not arrive after I turn it back on. Is there any recommendation on how to manage the metalogger servers to keep coherent change logs after a failure situation?

Miklos

On Mon, 6 Sep 2010 13:42:30 +0200 Michał Borychowski <mic...@ge...> wrote:

> Hi!
>
> Metaloggers should continuously receive the current changes from the
> master server and write them into their own text change logs named
> changelog_ml.0.mfs.
>
> How do you know that in your system they are saved hourly? Don't they
> increment with every change in the filesystem?
>
> Regarding your second question - yes, this is right. For now, the
> metalogger doesn't download the metadata file upon starting. We know
> about this shortcoming and we'll fix the behaviour soon.
From: Bán M. <ba...@vo...> - 2010-09-06 14:16:32
On Mon, 6 Sep 2010 13:42:30 +0200 Michał Borychowski <mic...@ge...> wrote:

> Hi!
>
> Metaloggers should continuously receive the current changes from the
> master server and write them into their own text change logs named
> changelog_ml.0.mfs.
>
> How do you know that in your system they are saved hourly? Don't they
> increment with every change in the filesystem?

Yes, exactly. No new lines appear in the metalogger server's changelog_ml.0.mfs while the master's changelog.0.mfs is updating. Nevertheless, sessions_ml.mfs is updated continuously. My server version is 1.6.7 and it is running on Ubuntu Jaunty. Here is a short log excerpt from a metalogger:

Sep 6 14:02:00 fekete mfsmetalogger[30553]: sessions downloaded 2374B/0.000435s (5.457 MB/s)
Sep 6 14:02:02 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/09/chunk_0000000000000009_00000001.mfs
Sep 6 14:02:12 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0C/chunk_000000000000000C_00000001.mfs
Sep 6 14:02:22 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0E/chunk_000000000000000E_00000001.mfs
Sep 6 14:02:32 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/10/chunk_0000000000000010_00000001.mfs
Sep 6 14:02:42 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0A/chunk_000000000000000A_00000001.mfs
Sep 6 14:02:52 fekete mfschunkserver[30562]: testing chunk: /mnt/mfs_hd1/0F/chunk_000000000000000F_00000001.mfs
Sep 6 14:03:00 fekete mfsmetalogger[30553]: sessions downloaded 2374B/0.000570s (4.165 MB/s)

> Regarding your second question - yes, this is right. For now, the
> metalogger doesn't download the metadata file upon starting. We know
> about this shortcoming and we'll fix the behaviour soon.

Thanks.

&Miklos
From: Michał B. <mic...@ge...> - 2010-09-06 11:42:58
Hi!

Metaloggers should continuously receive the current changes from the master server and write them into their own text change logs named changelog_ml.0.mfs.

How do you know that in your system they are saved hourly? Don't they increment with every change in the filesystem?

Regarding your second question - yes, this is right. For now, the metalogger doesn't download the metadata file upon starting. We know about this shortcoming and we'll fix the behaviour soon.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Bán Miklós [mailto:ba...@vo...]
Sent: Sunday, September 05, 2010 7:06 PM
To: moo...@li...
Subject: [Moosefs-users] changelogs on the metalogger

Hi,

recently I've installed an MFS cluster with several chunk servers, one master and two metaloggers. The metaloggers download the changelog files hourly. Is this normal? Can I set them to download the recent changelogs from the master more frequently, or do I have to write my own scripts to keep the metalogger servers up to date?

My second question: why doesn't the metalogger download metadata.mfs.back immediately after it has been started? On my metalogger servers the first metadata files appeared after a day. If this is the normal behaviour, I can't agree with it, because after a big failure on the first day I can't restore the master server from the metalogger.

Thanks,
Miklos
From: Bán M. <ba...@vo...> - 2010-09-05 17:25:35
Hi,

recently I've installed an MFS cluster with several chunk servers, one master and two metaloggers. The metaloggers download the changelog files hourly. Is this normal? Can I set them to download the recent changelogs from the master more frequently, or do I have to write my own scripts to keep the metalogger servers up to date?

My second question: why doesn't the metalogger download metadata.mfs.back immediately after it has been started? On my metalogger servers the first metadata files appeared after a day. If this is the normal behaviour, I can't agree with it, because after a big failure on the first day I can't restore the master server from the metalogger.

Thanks,
Miklos
From: Michał B. <mic...@ge...> - 2010-09-02 06:24:57
Hi! Below are the answers to your questions.

From: Reed Hong [mailto:fis...@gm...]
Sent: Thursday, August 26, 2010 4:39 AM
To: moo...@li...
Subject: [Moosefs-users] I want to know more detail about w/r operation

Hi: I am very concerned with some questions about Write/Read operations:

1. What is a write operation waiting for when it returns?

[MB] The "write" command itself doesn't wait for anything. Only "fsync" or "close" waits for confirmation of all writes. If "fsync" or "close" finishes with success, it means that all writes on all chunkservers also finished with success.

2. With goal = 3, client <=> cs1 <=> cs2 <=> cs3: if writing to cs2 or cs3 fails, how is that handled?

[MB] Mfsmount takes care of this. It repeats write operations a given number of times (default: 30). If the errors persist after these repetitions, then "write", "fsync" or "close" returns EIO (input/output error).

3. What consistency level: Strong Consistency, Weak Consistency or Eventual Consistency (see http://en.wikipedia.org/wiki/Eventual_consistency)?

[MB] A successful write (where fsync/close has finished with success) guarantees data consistency. If a write operation has not been finished for some reason (it returned EIO, or the client's machine was restarted during saving), then the data of the file being written would not be consistent.

4. How to ensure data consistency?

[MB] For write operations which finished with success there is a guarantee of consistency. For other files there is no consistency guarantee; you need to delete such files and write them again, or make a successful copy. We will probably prepare a module for testing data consistency in the future. The question is what to do if it finds files with inconsistent copies - probably a tool like "mfsfilerepair" should ask the user which copy to keep. If you follow the rules given here: http://www.moosefs.org/moosefs-faq.html#wriiten you should have consistent data.

We hope this information gives you more insight into read/write operations.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

I've read almost every page on moosefs.org and every mail in the mailing lists, but found little information about the details of w/r operations. I found some information on the mailing list:

"With goal=3 data transmission looks like this: client <=> cs1 <=> cs2 <=> cs3. The client waits for the write operation to end until cs1, cs2 and cs3 all finish writing. In this case, when the client finishes writing, MFS has 3 copies of the data. Your point B is closer to the real writing process. But the client doesn't wait with sending data. It sends new data before it receives confirmation of writing previous data. Only removal from the write queue takes place after writing confirmation."

I also read the source with the help of SourceInsight, tracing the code from mfs_write() ---> write_data(). In write_data() I see:

if (status==0) {
    if (offset+size>id->maxfleng) { // move fleng
        id->maxfleng = offset+size;
    }
    id->writewaiting++;
    while (id->flushwaiting>0) {
        pthread_cond_wait(&(id->writecond),&glock);
    }
    id->writewaiting--;
}
pthread_mutex_unlock(&glock);
if (status!=0) {
    return status;
}

It then calls write_block() to send the write operation to the job queue, and the thread write_worker() sends the real data. Does the write operation wait on the pthread_cond_wait() above?

In the mfs_read() function, I find many functions related to writing, such as write_data_flush(), write_data_end(), write_data_flush_inode(). This confuses me. Would you please provide more documentation about the Write/Read operations? Thanks a lot!

--
---------------------------------------------------------------
by fishwarter
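Michał's answers boil down to a simple application-side rule: on MooseFS, check the result of fsync() (or close()) and treat an EIO there as "not all copies were confirmed". The sketch below is a minimal illustration of that pattern, not code from mfsmount; the mount point and file name are invented, and the comments merely restate the semantics described in this message.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char buf[] = "some important data\n";
    /* /mnt/mfs is an example MooseFS mount point. */
    int fd = open("/mnt/mfs/important.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1)) {
        perror("write");            /* queueing the data already failed */
        close(fd);
        return 1;
    }
    if (fsync(fd) != 0) {           /* EIO here: mfsmount gave up after its retries */
        perror("fsync");
        close(fd);
        return 1;
    }
    if (close(fd) != 0) {           /* close() may also report the deferred error */
        perror("close");
        return 1;
    }
    printf("write confirmed by all chunkservers (per the semantics above)\n");
    return 0;
}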
From: Anh K. H. <ky...@vi...> - 2010-08-31 05:10:06
On Fri, 20 Aug 2010 12:58:40 +0200 Michał Borychowski <mic...@ge...> wrote:

> You can easily use MooseFS to store any kind of content, including
> log files. Performance should not be affected. But it depends on
> the logging mechanism: if for every line written the file is
> opened, the data appended and the file closed, it is certainly not a
> perfect way to save logs.
>
> And you have to remember one thing. Different clients cannot
> write at the same moment to the same file located in MooseFS. So if
> you have several httpd servers, let them save the logs under
> different filenames (e.g. access-192.168.0.1.log,
> access-192.168.0.2.log, etc.)

I'd like to share a small benchmark result. I set up a default Apache server, then ran

ab -n 150000 -c <C> http://localhost/
# ab is provided by Apache to get information about web serving.
# '<C>' is the concurrency level (the number of requests sent at the same time).

I compared the output in two cases:

(A)
* default Apache setup
* log files written to the local disk

(B)
* default Apache setup on the same server as (A)
* log files written to an MFS disk (provided by another server)

Here is the output:

(A) http://gx.viettug.org/zen/kyanh/jobs/log_on_root.png.html
(B) http://gx.viettug.org/zen/kyanh/jobs/log_on_mfs.png.html

As you can see, when MFS is used for the logs, the number of requests per second decreases significantly. This number corresponds to the red curve of the graphs: the maximum value in (B) is 4500, which is less than the lowest value in (A).

Conclusion: MFS isn't suitable for a high-load logging service (especially when WRITE performance is important). I don't know about the READ performance; I should run some other tests.

Regards,

> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Thursday, August 12, 2010 12:21 PM
> To: moo...@li...
> Subject: [Moosefs-users] Moosefs for logs
>
> Hi,
>
> I am a moosefs newbie. I am using EC2 instances and I intended to
> build a moosefs system to share EBS disks between instances.
>
> My question is: can I use moosefs for logs? My applications
> (web server, applications) need to write to log files, but I don't
> know if there's any performance problem when logs are written to
> moosefs' disks.
>
> Thank you for your help,
>
> Regards,
From: Ricardo J. B. <ric...@da...> - 2010-08-30 18:36:54
On Thursday 26 August 2010, you wrote:
> On Wed, 25 Aug 2010 13:20:56 -0300
> "Ricardo J. Barberis" <ric...@da...> wrote:
>
> [snip]
>
> > Really strange, are you sure you didn't write the file
> > to /mnt/mfs_m0 directly instead of /tmp/mfs_c0 ?
> >
> > What are /mnt/mfs_chunk_db1_0 and /mnt/mfs_chunk_db1_1, are those
> > loop files? (hinted by your previous email)
>
> Feel free to check my full logs:
> http://metakyanh.sarovar.org/moose.report1/report1.html

I checked that webpage, very informative :)

I'm not really familiar with bonnie, but I guess it creates and deletes lots of files to test disk throughput, etc., right?

In that case, you're probably filling MFS's trash can, check here:
http://www.moosefs.org/moosefs-faq.html#delete

And also the section "Setting quarantine time for trash bin" here:
http://www.moosefs.org/reference-guide.html

You may mount the trash can and delete the files in there, then set the trash time to 0 with 'mfssettrashtime -r 0 /tmp/mfs_c0' and check the resulting behaviour again.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..!
From: Michał B. <mic...@ge...> - 2010-08-27 11:01:54
I meant that "Accept filters" are supported by Linuxes with a new kernel.

The chunkserver disconnects disks if there are too many errors in a short time. Please check the disks.

Regards
Michał

-----Original Message-----
From: Alexander Akhobadze [mailto:akh...@ri...]
Sent: Friday, August 27, 2010 12:57 PM
To: Michał Borychowski
Subject: Re: [Moosefs-users] Strange chunk servers status on mfscgi web page

> You should not bother with this message. "Accept filters" are
> supported by FreeBSD and Linuxes with a new kernel.

OK. By the way, I use MooseFS on FreeBSD 8.1. Does it have a new kernel in the sense you mean?

> Can you show what you have in the Disks tab? Are there any disks
> with "damaged" status?

Yes! You are right! There are 2 lines:

192.168.0.20:9422:/moosefs_store/ 83262 2010-08-27 13:53 damaged
192.168.0.47:9422:/moosefs_store/ 16628 2010-08-27 18:34 damaged

The IPs are exactly the same as the "empty" chunk servers on the Servers tab.

-----Original Message-----
From: Alexander Akhobadze [mailto:akh...@ri...]
Sent: Friday, August 27, 2010 10:18 AM
To: Michal Borychowski
Subject: Re: [Moosefs-users] Strange chunk servers status on mfscgi web page

Hi!

No :--( If there are problems with permissions, the chunkserver does not start. All entries in mfshdd.cfg are correct; as I wrote, if I restart an "empty" chunk server daemon, its status becomes normal for a while. I think the root of the problem is linked to an error I found in the log:

mfschunkserver[6025]: csserv: can't set accept filter: No such file or directory

What does this error message mean?

======================================================

Hi!

Probably the chunkserver cannot determine the size of the disk. Maybe there are some wrong entries in mfshdd.cfg or there are problems with permissions.

Kind regards
Michal Borychowski

-----Original Message-----
Sent: Thursday, August 26, 2010 2:59 PM
To: moo...@li...
Subject: [Moosefs-users] Strange chunk servers status on mfscgi web page

Hi all!

I have a strange view on the mfscgi web page (please see the attached screenshot). As you can see, some chunk servers' status is "empty" (chunks/used/total are 0, "% used" is empty) and at the same time the chunk servers are running with no errors in the log. If I restart an "empty" chunk server daemon, its status becomes normal.

Is this normal, or does it show a wrong situation?

wbr
Alexander Akhobadze
From: Michał B. <mic...@ge...> - 2010-08-27 10:40:27
You should not bother with this message. "Accept filters" are supported by FreeBSD and Linuxes with a new kernel.

Can you show what you have in the Disks tab? Are there any disks with "damaged" status?

Regards
Michal

-----Original Message-----
From: Alexander Akhobadze [mailto:akh...@ri...]
Sent: Friday, August 27, 2010 10:18 AM
To: Michal Borychowski
Subject: Re: [Moosefs-users] Strange chunk servers status on mfscgi web page

Hi!

No :--( If there are problems with permissions, the chunkserver does not start. All entries in mfshdd.cfg are correct; as I wrote, if I restart an "empty" chunk server daemon, its status becomes normal for a while. I think the root of the problem is linked to an error I found in the log:

mfschunkserver[6025]: csserv: can't set accept filter: No such file or directory

What does this error message mean?

======================================================

Hi!

Probably the chunkserver cannot determine the size of the disk. Maybe there are some wrong entries in mfshdd.cfg or there are problems with permissions.

Kind regards
Michal Borychowski

-----Original Message-----
Sent: Thursday, August 26, 2010 2:59 PM
To: moo...@li...
Subject: [Moosefs-users] Strange chunk servers status on mfscgi web page

Hi all!

I have a strange view on the mfscgi web page (please see the attached screenshot). As you can see, some chunk servers' status is "empty" (chunks/used/total are 0, "% used" is empty) and at the same time the chunk servers are running with no errors in the log. If I restart an "empty" chunk server daemon, its status becomes normal.

Is this normal, or does it show a wrong situation?

wbr
Alexander Akhobadze
From: Michał B. <mic...@ge...> - 2010-08-27 08:03:16
Hi!

Probably the chunkserver cannot determine the size of the disk. Maybe there are some wrong entries in mfshdd.cfg or there are problems with permissions.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Ахобадзе Александр Гурамович [mailto:akh...@ri...]
Sent: Thursday, August 26, 2010 2:59 PM
To: moo...@li...
Subject: [Moosefs-users] Strange chunk servers status on mfscgi web page

Hi all!

I have a strange view on the mfscgi web page (please see the attached screenshot). As you can see, some chunk servers' status is "empty" (chunks/used/total are 0, "% used" is empty) and at the same time the chunk servers are running with no errors in the log. If I restart an "empty" chunk server daemon, its status becomes normal.

Is this normal, or does it show a wrong situation?

wbr
Alexander Akhobadze
From: Alexander A. <akh...@ri...> - 2010-08-26 18:17:05
...sorry for a slightly wrong message. I found an error in the log:

mfschunkserver[6025]: csserv: can't set accept filter: No such file or directory

What does it mean?

======================================================

Hi all!

I have a strange view on the mfscgi web page (please see the attached screenshot). As you can see, some chunk servers' status is "empty" (chunks/used/total are 0, "% used" is empty) and at the same time the chunk servers are running with no errors in the log. If I restart an "empty" chunk server daemon, its status becomes normal.

Is this normal, or does it show a wrong situation?

wbr
Alexander Akhobadze
From: Alexander A. <akh...@ri...> - 2010-08-26 13:42:15
Yes, yes! I agree with Anh Ky Huynh. Write speed to an MFS disk is very low. We use Samba over MFS and see that reading from the Samba share has normal speed, while writing, as opposed to reading, is very slow.

In connection with this I have a question. When an MFS client writes a file with goal > 1, how is that performed? Does the client wait for the write operation of all file copies to complete? Or does the client write to just one chunk server, after which we rely on chunk replication to achieve the goal?

wbr
Alexander Akhobadze

======================================================
You wrote, on 26 August 2010, 17:03:31:
======================================================

How exactly does AWS EC2 work? Is it a web service? How do you connect to it? What is your Internet bandwidth? A speed of 136 MB/s is practically not achievable on a 1 Gbit network.

We suggest you generate a random file (from /dev/urandom) and try to upload it using a normal "cp", or "dd" with that file given as the if= parameter. You can also repeat this test with four parallel writes:

dd if=/dev/zero of=/tmp/mfs_c0/test_1 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_2 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_3 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_4 bs=1024 count=524288 &

You can also check it with a slightly bigger block, e.g.:

dd if=/dev/zero of=/tmp/mfs_c0/test_1 bs=1048576 count=512

Kind regards
Michał

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Wednesday, August 25, 2010 4:59 AM
To: Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Moosefs for logs

On Fri, 20 Aug 2010 12:58:40 +0200 Michał Borychowski <mic...@ge...> wrote:

> You can easily use MooseFS to store any kind of content, including
> log files. Performance should not be affected. But it depends on
> the logging mechanism: if for every line written the file is
> opened, the data appended and the file closed, it is certainly not a
> perfect way to save logs.

Here are my simple test results in an AWS EC2 environment: a 'dd' command tries to write about 540 MB to an MFS disk, and the speed is very low (28 MB/s). On the same server with a native disk (the instance disk of any AWS EC2 instance), the same command yields a speed of around 140 MB/s. I don't know if I can deploy a database server on an MFS disk ...

/--------------------------------------------------------------
$ dd if=/dev/zero of=/tmp/mfs_c0/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 18.9801 s, 28.3 MB/s

$ dd if=/dev/zero of=/tmp/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 3.95007 s, 136 MB/s
\--------------------------------------------------------------

> And you have to remember one thing. Different clients cannot
> write at the same moment to the same file located in MooseFS. So if
> you have several httpd servers, let them save the logs under
> different filenames (e.g. access-192.168.0.1.log,
> access-192.168.0.2.log, etc.)
>
> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Thursday, August 12, 2010 12:21 PM
> To: moo...@li...
> Subject: [Moosefs-users] Moosefs for logs
>
> Hi,
>
> I am a moosefs newbie. I am using EC2 instances and I intended to
> build a moosefs system to share EBS disks between instances.
>
> My question is: can I use moosefs for logs? My applications
> (web server, applications) need to write to log files, but I don't
> know if there's any performance problem when logs are written to
> moosefs' disks.
>
> Thank you for your help,
>
> Regards,
From: Michał B. <mic...@ge...> - 2010-08-26 13:03:52
How exactly does AWS EC2 work? Is it a web service? How do you connect to it? What is your Internet bandwidth? A speed of 136 MB/s is practically not achievable on a 1 Gbit network.

We suggest you generate a random file (from /dev/urandom) and try to upload it using a normal "cp", or "dd" with that file given as the if= parameter. You can also repeat this test with four parallel writes:

dd if=/dev/zero of=/tmp/mfs_c0/test_1 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_2 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_3 bs=1024 count=524288 &
dd if=/dev/zero of=/tmp/mfs_c0/test_4 bs=1024 count=524288 &

You can also check it with a slightly bigger block, e.g.:

dd if=/dev/zero of=/tmp/mfs_c0/test_1 bs=1048576 count=512

Kind regards
Michał

-----Original Message-----
From: Anh K. Huynh [mailto:ky...@vi...]
Sent: Wednesday, August 25, 2010 4:59 AM
To: Michał Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] Moosefs for logs

On Fri, 20 Aug 2010 12:58:40 +0200 Michał Borychowski <mic...@ge...> wrote:

> You can easily use MooseFS to store any kind of content, including
> log files. Performance should not be affected. But it depends on
> the logging mechanism: if for every line written the file is
> opened, the data appended and the file closed, it is certainly not a
> perfect way to save logs.

Here are my simple test results in an AWS EC2 environment: a 'dd' command tries to write about 540 MB to an MFS disk, and the speed is very low (28 MB/s). On the same server with a native disk (the instance disk of any AWS EC2 instance), the same command yields a speed of around 140 MB/s. I don't know if I can deploy a database server on an MFS disk ... :(

/--------------------------------------------------------------
$ dd if=/dev/zero of=/tmp/mfs_c0/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 18.9801 s, 28.3 MB/s

$ dd if=/dev/zero of=/tmp/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 3.95007 s, 136 MB/s
\--------------------------------------------------------------

> And you have to remember one thing. Different clients cannot
> write at the same moment to the same file located in MooseFS. So if
> you have several httpd servers, let them save the logs under
> different filenames (e.g. access-192.168.0.1.log,
> access-192.168.0.2.log, etc.)
>
> -----Original Message-----
> From: Anh K. Huynh [mailto:ky...@vi...]
> Sent: Thursday, August 12, 2010 12:21 PM
> To: moo...@li...
> Subject: [Moosefs-users] Moosefs for logs
>
> Hi,
>
> I am a moosefs newbie. I am using EC2 instances and I intended to
> build a moosefs system to share EBS disks between instances.
>
> My question is: can I use moosefs for logs? My applications
> (web server, applications) need to write to log files, but I don't
> know if there's any performance problem when logs are written to
> moosefs' disks.
>
> Thank you for your help,
>
> Regards,
> -- Anh Ky Huynh
From: Ахобадзе А. Г. <akh...@ri...> - 2010-08-26 12:59:38
Hi all!

I have a strange view on the mfscgi web page (please see the attached screenshot). As you can see, some chunk servers' status is "empty" (chunks/used/total are 0, "% used" is empty) and at the same time the chunk servers are running with no errors in the log. If I restart an "empty" chunk server daemon, its status becomes normal.

Is this normal, or does it show a wrong situation?

wbr
Alexander Akhobadze
From: Michał B. <mic...@ge...> - 2010-08-26 11:19:53
Hi!

Probably the files are still open. You cannot delete such files "manually", but you can cause them to be deleted. Just restart the mfsmount processes that are considered to still hold these files open. Or, even without restarting mfsmount, you can restart the PostgreSQL processes which may have these files open. And if the power failure also hit the machines running PostgreSQL, the system would automatically delete these files after some hours.

If you need any further assistance, please ask.

Kind regards
Michał Borychowski
MooseFS Support Manager
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Aurélien Richaud [mailto:aur...@ne...]
Sent: Thursday, August 26, 2010 9:38 AM
To: moo...@li...
Subject: [Moosefs-users] Problem with missing chunk

Hi,

I've been using MooseFS at home for a short time, and I had a problem. Due to a power failure, I now have 2 missing chunks in reserved files. These files are not open anymore (they are temporary files from PostgreSQL), but it looks like the system won't delete them.

I'm running MooseFS 1.6.17 on two Gentoo boxes (the first is master + chunkserver, the second is metalogger + chunkserver).

Can you help me purge these reserved files? I didn't find out how to do this.

Anyway, you've done very nice work on MooseFS. I posted a feature request about getfacl and setfacl.

Regards,
--
@+ Vermi
From: Anh K. H. <ky...@vi...> - 2010-08-26 09:44:39
On Wed, 25 Aug 2010 13:20:56 -0300 "Ricardo J. Barberis" <ric...@da...> wrote:

> [snip]
>
> Really strange, are you sure you didn't write the file
> to /mnt/mfs_m0 directly instead of /tmp/mfs_c0 ?
>
> What are /mnt/mfs_chunk_db1_0 and /mnt/mfs_chunk_db1_1, are those
> loop files? (hinted by your previous email)

Feel free to check my full logs:
http://metakyanh.sarovar.org/moose.report1/report1.html

-- Anh Ky Huynh
From: Aurélien R. <aur...@ne...> - 2010-08-26 07:38:00
Hi,

I've been using MooseFS at home for a short time, and I had a problem. Due to a power failure, I now have 2 missing chunks in reserved files. These files are not open anymore (they are temporary files from PostgreSQL), but it looks like the system won't delete them.

I'm running MooseFS 1.6.17 on two Gentoo boxes (the first is master + chunkserver, the second is metalogger + chunkserver).

Can you help me purge these reserved files? I didn't find out how to do this.

Anyway, you've done very nice work on MooseFS. I posted a feature request about getfacl and setfacl.

Regards,
--
@+ Vermi
From: Reed H. <fis...@gm...> - 2010-08-26 02:38:57
Hi:

I am very concerned with some questions about Write/Read operations:

1. What is a write operation waiting for when it returns?
2. With goal = 3, client <=> cs1 <=> cs2 <=> cs3: if writing to cs2 or cs3 fails, how is that handled?
3. What consistency level: Strong Consistency, Weak Consistency or Eventual Consistency (see en.wikipedia.org/wiki/Eventual_consistency)?
4. How to ensure data consistency?

I've read almost every page on moosefs.org and every mail in the mailing lists, but found little information about the details of w/r operations. I found some information on the mailing list:

"With goal=3 data transmission looks like this: client <=> cs1 <=> cs2 <=> cs3. The client waits for the write operation to end until cs1, cs2 and cs3 all finish writing. In this case, when the client finishes writing, MFS has 3 copies of the data. Your point B is closer to the real writing process. But the client doesn't wait with sending data. It sends new data before it receives confirmation of writing previous data. Only removal from the write queue takes place after writing confirmation."

I also read the source with the help of SourceInsight, tracing the code from mfs_write() ---> write_data(). In write_data() I see:

if (status==0) {
    if (offset+size>id->maxfleng) { // move fleng
        id->maxfleng = offset+size;
    }
    id->writewaiting++;
    while (id->flushwaiting>0) {
        pthread_cond_wait(&(id->writecond),&glock);
    }
    id->writewaiting--;
}
pthread_mutex_unlock(&glock);
if (status!=0) {
    return status;
}

It then calls write_block() to send the write operation to the job queue, and the thread write_worker() sends the real data. Does the write operation wait on the pthread_cond_wait() above?

In the mfs_read() function, I find many functions related to writing, such as write_data_flush(), write_data_end(), write_data_flush_inode(). This confuses me. Would you please provide more documentation about the Write/Read operations? Thanks a lot!

--
---------------------------------------------------------------
by fishwarter
From: Anh K. H. <ky...@vi...> - 2010-08-26 02:21:24
On Wed, 25 Aug 2010 13:20:56 -0300 "Ricardo J. Barberis" <ric...@da...> wrote:

> On Wednesday 25 August 2010, Anh K. Huynh wrote:
> > Hi,
> >
> > I have an MFS system with two chunk disks:
> > * a disk of size 5 GB (mounted as /mnt/mfs_m0)
> > * a disk of size 30 GB (mounted as /mnt/mfs_m1)
> >
> > The MFS disk /tmp/mfs_c0 was reported to have 1 GB in size, as below:
> >
> > /-------------------------------------------------------
> > Filesystem            Size  Used Avail Use% Mounted on
> > /mnt/mfs_chunk_db1_0  5.0G  4.4G  288M  94% /mnt/mfs_m0
> > /mnt/mfs_chunk_db1_1   30G   27G  1.2G  96% /mnt/mfs_m1
> > mfs#10.0.0.10:9421   1001M     0 1001M   0% /tmp/mfs_c0
> >
> > ls /tmp/mfs_c0/
> > \-------------------------------------------------------
> >
> > All these disks were completely free (0% used) before I tried to
> > write a very large file to the MFS mount point. During the writing
> > process, my program detected that the disk was full and it aborted.
> > Unfortunately, the MFS server couldn't recover from that error, and
> > though it reports that the MFS disk is free (0% used), I can't write
> > anything to it. Moreover, the chunk disks are full: they contain many
> > chunk files that aren't valid anymore.
> >
> > How to recover from such an error? How to clean up chunk files on the
> > chunk disks? How to know whether any chunk files are valid for use?
> >
> > Thanks for your help.
>
> Really strange, are you sure you didn't write the file
> to /mnt/mfs_m0 directly instead of /tmp/mfs_c0 ?

I'm sure that I wrote data only to /tmp/mfs_c0. A listing command shows that:

* in /mnt/mfs_m1, there are 428 chunk files (*.mfs)
* in /mnt/mfs_m0, there are 68 chunk files (*.mfs)

When `mfsmaster` starts, it reports 496 chunks, but in /tmp/mfs_c0 there are actually no files associated with those chunks.

/-------------------------------------------------------------
working directory: /opt/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
loading metadata ...
loading objects (files,directories,etc.) ... ok
loading names ... ok
loading deletion timestamps ... ok
checking filesystem consistency ... ok
loading chunks data ... ok
connecting files and chunks ... ok
all inodes: 70
directory inodes: 1
file inodes: 69
chunks: 496
metadata file has been loaded
stats file has been loaded
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
\-------------------------------------------------------------

> What are /mnt/mfs_chunk_db1_0 and /mnt/mfs_chunk_db1_1, are those
> loop files? (hinted by your previous email)

Yes, they are both loop files created with the `dd` command (as in the MFS documents). I created the file `/mnt/mfs_chunk_db1_0` first, then I created `/mnt/mfs_chunk_db1_1` with a bigger size, hoping that MFS would write my huge data only to the second disk (goal = 1). Unfortunately, MFS tried to write data to both chunk disks and the error occurred. Please note that all of this happened on the same server.

I tried to investigate the MFS source code to find some way to check/list the status of chunk files; I think such a program would be very useful.

Regards,
-- Anh Ky Huynh
From: Ricardo J. B. <ric...@da...> - 2010-08-25 16:21:14
On Wednesday 25 August 2010, Anh K. Huynh wrote:
> Hi,
>
> I have an MFS system with two chunk disks:
> * a disk of size 5 GB (mounted as /mnt/mfs_m0)
> * a disk of size 30 GB (mounted as /mnt/mfs_m1)
>
> The MFS disk /tmp/mfs_c0 was reported to have 1 GB in size, as below:
>
> /-------------------------------------------------------
> Filesystem            Size  Used Avail Use% Mounted on
> /mnt/mfs_chunk_db1_0  5.0G  4.4G  288M  94% /mnt/mfs_m0
> /mnt/mfs_chunk_db1_1   30G   27G  1.2G  96% /mnt/mfs_m1
> mfs#10.0.0.10:9421   1001M     0 1001M   0% /tmp/mfs_c0
>
> ls /tmp/mfs_c0/
> \-------------------------------------------------------
>
> All these disks were completely free (0% used) before I tried to write a
> very large file to the MFS mount point. During the writing process, my
> program detected that the disk was full and it aborted. Unfortunately,
> the MFS server couldn't recover from that error, and though it reports
> that the MFS disk is free (0% used), I can't write anything to it.
> Moreover, the chunk disks are full: they contain many chunk files that
> aren't valid anymore.
>
> How to recover from such an error? How to clean up chunk files on the
> chunk disks? How to know whether any chunk files are valid for use?
>
> Thanks for your help.

Really strange, are you sure you didn't write the file to /mnt/mfs_m0 directly instead of /tmp/mfs_c0 ?

What are /mnt/mfs_chunk_db1_0 and /mnt/mfs_chunk_db1_1, are those loop files? (hinted by your previous email)

Regards,
--
Ricardo J. Barberis
Senior SysAdmin - I+D
Dattatec.com :: Soluciones de Web Hosting
Su Hosting hecho Simple..!