From: Upendra M. <upe...@he...> - 2011-05-19 15:02:49
|
Hi, Is there a way to check the health of mfsmaster, mfsmetalogger and mfschunkserver? -- Thanks and Regards, Upendra.M |
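A basic liveness check can be scripted from any host that can reach the servers: confirm each daemon is running and answering on its TCP port. A minimal sketch; the hostnames are examples, and the port numbers are assumptions taken from common MooseFS defaults (9422 for mfschunkserver appears later in this archive, 9421 for the master's client port does not, so check your *.cfg files):

    # master and chunkservers: check that each daemon answers on its TCP port
    for target in mfsmaster:9421 chunk1:9422 chunk2:9422; do
        host=${target%%:*}; port=${target##*:}
        if nc -z -w 2 "$host" "$port"; then echo "OK   $target"; else echo "FAIL $target"; fi
    done
    # the metalogger only makes outbound connections, so check its process instead:
    ssh metalogger-host pgrep -x mfsmetalogger >/dev/null \
        && echo "OK   mfsmetalogger" || echo "FAIL mfsmetalogger"

The CGI monitor (started with mfscgiserv on the master, if installed) also shows connected chunkservers and metaloggers, which covers most day-to-day health checks.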
From: Giovanni T. <gt...@li...> - 2011-05-19 08:07:51
|
Hi Christoph, On 19/05/2011 09:57, Christoph Raible wrote: > But I have a little problem with moosefs and KVM. I can't create > KVM-Images with virt-inst, virt-manager,... on the MooseFS-filesystem. > On the MooseFS-mailinglist there are no solutions for this Problems. Do you have a specific error message to share? Maybe it is a simple permission problem, or some bad bug ;) > Now I thought you maybe have a solution for that or do you have the same > problems... On my setup I haven't had any MooseFS-related problem; can you try the same tests on a plain filesystem mount? > I hope it's ok for you that I wrote to you directly. And apologise my > bad english ;) Maybe on the MooseFS mailing list someone else can give better advice than I can alone. I'm adding it as CC to this reply. Regards, -- Giovanni Toraldo http://www.libersoft.it/ |
From: W K. <wk...@bn...> - 2011-05-18 19:59:58
|
And resolved. From the DoveCot manual. "By default Dovecot mmap()s the index files. This may not work with all clustered filesystems, and it most certainly won't work with NFS. Setting mmap_disable = yes disables mmap() and Dovecot does its own internal caching. If mmap() is supported by your filesystem, it's still not certain that it gives better performance. Try benchmarking to make sure." which indeed solved the problem. Sorry for the noise. -bill On 5/18/2011 12:51 PM, W Kern wrote: > More info > > This appears to be mmap with DoveCot issue. > > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): mmap() failed with index > file /home/xxx/Maildir/.Archives.2011/dovecot.index: No such device > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): file mail-index.c: line > 1900 (mail_index_move_to_memory): assertion failed: (index->fd == -1) > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): Raw backtrace: imap > [0x80acd51] -> imap [0x80acc6c] -> imap [0x808b317] -> > imap(mail_index_open+0x48e) [0x808cefe] -> > imap(index_storage_mailbox_init+0x152) [0x8083c82] -> imap [0x80654c3] > -> imap [0x80662b2] -> imap(cmd_copy+0x258) [0x8057098] -> > imap(cmd_uid+0x50) [0x805aa00] -> imap [0x805af88] -> imap [0x805b00c] > -> imap(_client_input+0x6c) [0x805b6dc] -> > imap(io_loop_handler_run+0x110) [0x80b29f0] -> imap(io_loop_run+0x1c) > [0x80b1f2c] -> imap(main+0x4c0) [0x8063630] -> > /lib/libc.so.6(__libc_start_main+0xdc) [0x8cbe9c] -> imap [0x8056181] > May 18 12:47:49 ariel2 dovecot: child 5515 (imap) killed with signal 6 > > -bill > > On 5/18/2011 12:19 PM, W Kern wrote: >> >> Greetings. >> >> After extensive testing MFS on a test cluster, we proceeded to deploy >> a smallish (3TB) for a IMAP MailServer (DoveCot). >> >> The system is currently 3 chunkservers with a single 1TB drive, a >> Master and MetaServer, all running CentOS5.x and EXT3 with a default >> install. MooseFS is latest 1.6.20 >> The Goal is 2. >> >> Now, I know MooseFS isn't for small files but we wanted to torture it >> to see how things worked and it turns out performance is very good. >> The users can't tell the difference between the new setup and the old >> setup which was a DRBD active/passive arrangement. In fact, some >> remote users are reporting improved performance. Disk usage is a lot >> more but that was to be expected given the clusters. >> >> We have come across an issue. >> >> If a client uses Thunderbirds 3's ARCHIVE button, the Thunderbird >> program immediately reports losing a connection to the server. This >> does not occur on directed attached storage (with or without DRBD). >> >> The exact message is "Server jo...@fo... <mailto:jo...@fo...> has >> disconnected. The server may have gone down or there may be a network >> problem". In addition, although the 'archived' file has been moved >> to the Archive folder, it is no longer accessible without receiving >> that error and a look at the raw Maildir shows that the folder was >> created ".Archives.2011" >> but there is no message in there. So the archive button will 'lose' >> that email. Attempts to Archive other emails also result in a loss >> of the message. >> >> In their instructions Thunderbird describes the Archive function with >> some of the following details: >> >> * For each account, a folder can be specified to archive messages >> into with a single function. >> * *Note:* Archiving involves a /move/ operation rather than a >> /copy/. >> * The currently implemented scheme creates an Archives folder >> which contains subfolders for each year. 
In turn, those folders >> /can/ contain per-month subfolders (see figure to the right). >> This is determined by a preference >> *mail.server.default.archive_granularity* (or >> *mail.server.server*#*.archive_granularity* for a specific >> account) with 0=single Archives folder, 1=by-year subfolders >> (default), and 2=year/month subfolders. >> >> >> I can reproduce it on every Thunderbird 3 client irregardless of >> Linux/Mac/Win7 operating system and it occurs every single time. >> >> I will be testing Outlook shortly, but I wanted to throw this out >> there first. >> >> I am looking for ideas as to how to fix this issue or even debug it. >> >> Sincerely, >> >> -bill >> >> >> > |
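For reference, the fix Bill quotes at the top of that message boils down to a single Dovecot setting. A hedged sketch of the change; the file location varies by distribution, and whether it belongs in dovecot.conf or an included file depends on your Dovecot version:

    # dovecot.conf (location varies, e.g. /etc/dovecot/dovecot.conf)
    # Dovecot mmap()s its index files by default; on MooseFS, as on NFS and other
    # network filesystems, this can fail, so let Dovecot do its own internal caching:
    mmap_disable = yes

After changing it, restart or reload Dovecot so running imap processes pick up the new setting.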
From: Papp T. <to...@ma...> - 2011-05-18 19:57:24
|
On 05/18/2011 09:51 PM, W Kern wrote: > More info > > This appears to be mmap with DoveCot issue. > > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): mmap() failed with index > file /home/xxx/Maildir/.Archives.2011/dovecot.index: No such device > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): file mail-index.c: line > 1900 (mail_index_move_to_memory): assertion failed: (index->fd == -1) > May 18 12:47:49 ariel2 dovecot: IMAP(xxx): Raw backtrace: imap > [0x80acd51] -> imap [0x80acc6c] -> imap [0x808b317] -> > imap(mail_index_open+0x48e) [0x808cefe] -> > imap(index_storage_mailbox_init+0x152) [0x8083c82] -> imap [0x80654c3] > -> imap [0x80662b2] -> imap(cmd_copy+0x258) [0x8057098] -> > imap(cmd_uid+0x50) [0x805aa00] -> imap [0x805af88] -> imap [0x805b00c] > -> imap(_client_input+0x6c) [0x805b6dc] -> > imap(io_loop_handler_run+0x110) [0x80b29f0] -> imap(io_loop_run+0x1c) > [0x80b1f2c] -> imap(main+0x4c0) [0x8063630] -> > /lib/libc.so.6(__libc_start_main+0xdc) [0x8cbe9c] -> imap [0x8056181] > May 18 12:47:49 ariel2 dovecot: child 5515 (imap) killed with signal 6 What about disabling it? # Don't use mmap() at all. This is required if you store indexes to shared # filesystems (NFS or clustered filesystem). mmap_disable = no tamas |
From: W K. <wk...@bn...> - 2011-05-18 19:52:01
|
More info This appears to be mmap with DoveCot issue. May 18 12:47:49 ariel2 dovecot: IMAP(xxx): mmap() failed with index file /home/xxx/Maildir/.Archives.2011/dovecot.index: No such device May 18 12:47:49 ariel2 dovecot: IMAP(xxx): file mail-index.c: line 1900 (mail_index_move_to_memory): assertion failed: (index->fd == -1) May 18 12:47:49 ariel2 dovecot: IMAP(xxx): Raw backtrace: imap [0x80acd51] -> imap [0x80acc6c] -> imap [0x808b317] -> imap(mail_index_open+0x48e) [0x808cefe] -> imap(index_storage_mailbox_init+0x152) [0x8083c82] -> imap [0x80654c3] -> imap [0x80662b2] -> imap(cmd_copy+0x258) [0x8057098] -> imap(cmd_uid+0x50) [0x805aa00] -> imap [0x805af88] -> imap [0x805b00c] -> imap(_client_input+0x6c) [0x805b6dc] -> imap(io_loop_handler_run+0x110) [0x80b29f0] -> imap(io_loop_run+0x1c) [0x80b1f2c] -> imap(main+0x4c0) [0x8063630] -> /lib/libc.so.6(__libc_start_main+0xdc) [0x8cbe9c] -> imap [0x8056181] May 18 12:47:49 ariel2 dovecot: child 5515 (imap) killed with signal 6 -bill On 5/18/2011 12:19 PM, W Kern wrote: > > Greetings. > > After extensive testing MFS on a test cluster, we proceeded to deploy > a smallish (3TB) for a IMAP MailServer (DoveCot). > > The system is currently 3 chunkservers with a single 1TB drive, a > Master and MetaServer, all running CentOS5.x and EXT3 with a default > install. MooseFS is latest 1.6.20 > The Goal is 2. > > Now, I know MooseFS isn't for small files but we wanted to torture it > to see how things worked and it turns out performance is very good. > The users can't tell the difference between the new setup and the old > setup which was a DRBD active/passive arrangement. In fact, some > remote users are reporting improved performance. Disk usage is a lot > more but that was to be expected given the clusters. > > We have come across an issue. > > If a client uses Thunderbirds 3's ARCHIVE button, the Thunderbird > program immediately reports losing a connection to the server. This > does not occur on directed attached storage (with or without DRBD). > > The exact message is "Server jo...@fo... <mailto:jo...@fo...> has > disconnected. The server may have gone down or there may be a network > problem". In addition, although the 'archived' file has been moved > to the Archive folder, it is no longer accessible without receiving > that error and a look at the raw Maildir shows that the folder was > created ".Archives.2011" > but there is no message in there. So the archive button will 'lose' > that email. Attempts to Archive other emails also result in a loss of > the message. > > In their instructions Thunderbird describes the Archive function with > some of the following details: > > * For each account, a folder can be specified to archive messages > into with a single function. > * *Note:* Archiving involves a /move/ operation rather than a /copy/. > * The currently implemented scheme creates an Archives folder > which contains subfolders for each year. In turn, those folders > /can/ contain per-month subfolders (see figure to the right). > This is determined by a preference > *mail.server.default.archive_granularity* (or > *mail.server.server*#*.archive_granularity* for a specific > account) with 0=single Archives folder, 1=by-year subfolders > (default), and 2=year/month subfolders. > > > I can reproduce it on every Thunderbird 3 client irregardless of > Linux/Mac/Win7 operating system and it occurs every single time. > > I will be testing Outlook shortly, but I wanted to throw this out > there first. 
> > I am looking for ideas as to how to fix this issue or even debug it. > > Sincerely, > > -bill > > > |
From: W K. <wk...@bn...> - 2011-05-18 19:19:55
|
Greetings. After extensive testing of MFS on a test cluster, we proceeded to deploy a smallish (3TB) system for an IMAP mail server (Dovecot).

The system is currently 3 chunkservers, each with a single 1TB drive, plus a Master and MetaServer, all running CentOS 5.x and ext3 with a default install. MooseFS is the latest 1.6.20. The goal is 2.

Now, I know MooseFS isn't for small files, but we wanted to torture it to see how things worked, and it turns out performance is very good. The users can't tell the difference between the new setup and the old setup, which was a DRBD active/passive arrangement. In fact, some remote users are reporting improved performance. Disk usage is a lot more, but that was to be expected given the clusters.

We have come across an issue. If a client uses Thunderbird 3's ARCHIVE button, the Thunderbird program immediately reports losing the connection to the server. This does not occur on direct-attached storage (with or without DRBD). The exact message is "Server jo...@fo... has disconnected. The server may have gone down or there may be a network problem". In addition, although the 'archived' file has been moved to the Archive folder, it is no longer accessible without receiving that error, and a look at the raw Maildir shows that the folder ".Archives.2011" was created but there is no message in there. So the archive button will 'lose' that email. Attempts to archive other emails also result in a loss of the message.

In their instructions Thunderbird describes the Archive function with some of the following details:

* For each account, a folder can be specified to archive messages into with a single function.
* *Note:* Archiving involves a /move/ operation rather than a /copy/.
* The currently implemented scheme creates an Archives folder which contains subfolders for each year. In turn, those folders /can/ contain per-month subfolders (see figure to the right). This is determined by a preference *mail.server.default.archive_granularity* (or *mail.server.server*#*.archive_granularity* for a specific account) with 0=single Archives folder, 1=by-year subfolders (default), and 2=year/month subfolders.

I can reproduce it on every Thunderbird 3 client regardless of Linux/Mac/Win7 operating system, and it occurs every single time. I will be testing Outlook shortly, but I wanted to throw this out there first. I am looking for ideas as to how to fix this issue or even debug it. Sincerely, -bill |
From: Thomas S H. <tha...@gm...> - 2011-05-18 14:52:31
|
On Wed, May 18, 2011 at 8:44 AM, Papp Tamas <to...@ma...> wrote: > > On 05/18/2011 04:35 PM, Thomas S Hatch wrote: > > With MooseFS your biggest bottlenecks are chunkserver disk speed, and > > hitting the mfsmaster too often. like Michal said, if you have a lot > > of very small files you will place more load on the master. As for the > > disk speed, I use RAID 0 for my chunkservers in groups of 2-4 disks > > and we get 60-90MB throughput. Right now I have a ~500 TB MooseFS > > setup, so I think that a 24T setup should be no problem at all! > > Do you mean this speed with one per client or summary of clients speed? > > Thank you, > > tamas > > > ------------------------------------------------------------------------------ > What Every C/C++ and Fortran developer Should Know! > Read this article and learn how Intel has extended the reach of its > next-generation tools to help Windows* and Linux* C/C++ and Fortran > developers boost performance applications - including clusters. > http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > We see that speed on average across our clients, we also use 10G networking for the storage backend. -Thomas S Hatch |
From: Papp T. <to...@ma...> - 2011-05-18 14:44:38
|
On 05/18/2011 04:35 PM, Thomas S Hatch wrote: > With MooseFS your biggest bottlenecks are chunkserver disk speed, and > hitting the mfsmaster too often. like Michal said, if you have a lot > of very small files you will place more load on the master. As for the > disk speed, I use RAID 0 for my chunkservers in groups of 2-4 disks > and we get 60-90MB throughput. Right now I have a ~500 TB MooseFS > setup, so I think that a 24T setup should be no problem at all! Do you mean this speed with one per client or summary of clients speed? Thank you, tamas |
From: Thomas S H. <tha...@gm...> - 2011-05-18 14:42:56
|
Hi, I have effectively 13 filesystems on one mfsmaster, but only one underlying "filesystem". The thing to do is to just have your clients mount subdirectories under the moosefs root directory, so the moosefs root directory looks like this: /stuff/<files> /morestuff/<files> and then on the client you mount like this: mfsmount <mountpoint> -S stuff # mounts the /stuff directory to the mountpoint instead of the whole moosefs mount I use this to partition out data for different environments and different types of data, so I have: /users /prod/media /qa/media etc. On Wed, May 18, 2011 at 2:02 AM, Christoph Raible < c.r...@sc...> wrote: > Hi > > I have a question about configuration of the MooseFS-Filesystem. I got > the following infrastructure... > > 4 Chunkservers > 1 Master > 2 clients > > > Now I want 2 Filesystems (FS1 and FS2) > In FS1 is Server 1+2 and in FS2 is Server 3+4. > > Both should be handled by the one Masterserver and the clients should > connect to both Filesystems... > > Now I don't know if this is possible... or do I need one Master for a > Filesystem? > > > It would be great if someone can help me :) > > > Best regards, > > Ch.Raible > -- > Vorstand/Board of Management: > Dr. Bernd Finkbeiner, Dr. Roland Niemeier, > Dr. Arno Steitz, Dr. Ingrid Zech > Vorsitzender des Aufsichtsrats/ > Chairman of the Supervisory Board: > Philippe Miltin > Sitz/Registered Office: Tuebingen > Registergericht/Registration Court: Stuttgart > Registernummer/Commercial Register No.: HRB 382196 > > > > > ------------------------------------------------------------------------------ > What Every C/C++ and Fortran developer Should Know! > Read this article and learn how Intel has extended the reach of its > next-generation tools to help Windows* and Linux* C/C++ and Fortran > developers boost performance applications - including clusters. > http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
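Concretely, the scheme Thomas describes needs only one MooseFS instance; each client mount just exposes a different subtree. A small sketch, assuming the master resolves as mfsmaster and the directories users/ and prod/media/ already exist in the MooseFS root (all paths here are examples):

    mkdir -p /mnt/users /mnt/prod-media
    mfsmount /mnt/users      -H mfsmaster -S users        # client sees only the /users subtree
    mfsmount /mnt/prod-media -H mfsmaster -S prod/media   # client sees only /prod/media

From the clients' point of view the subtrees behave like separate filesystems, even though the master manages a single namespace.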
From: Thomas S H. <tha...@gm...> - 2011-05-18 14:41:34
|
With MooseFS your biggest bottlenecks are chunkserver disk speed, and hitting the mfsmaster too often. like Michal said, if you have a lot of very small files you will place more load on the master. As for the disk speed, I use RAID 0 for my chunkservers in groups of 2-4 disks and we get 60-90MB throughput. Right now I have a ~500 TB MooseFS setup, so I think that a 24T setup should be no problem at all! -Thomas S Hatch 2011/5/18 Michal Borychowski <mic...@ge...> > Hi! > > > > I guess you mean 1200-1300 M*bits*/second? Which would make 150 M*bytes* > /second? > > > > Unfortunately I doubt if you can achieve such speeds with MooseFS. Unless > you use some SSD disks for chunkservers which would be very expensive... We > have write speeds of about 20-30 MiB/s and reads of 30-50MiB/s at our > environment with goal=2. > > > > On the other hand MooseFS would be perfect just for storing the content (it > is much better optimised for large files, not the small ones). For editing > purposes probably you should have separate machines and think of good "flow" > of the files. > > > > > > Kind regards > > Michał Borychowski > > MooseFS Support Manager > > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > > Gemius S.A. > > ul. Wołoska 7, 02-672 Warszawa > > Budynek MARS, klatka D > > Tel.: +4822 874-41-00 > > Fax : +4822 874-41-01 > > > > > > *From:* Didi Pramujadi [mailto:di...@me...] > *Sent:* Wednesday, May 18, 2011 4:03 AM > *To:* moo...@li... > *Subject:* [Moosefs-users] Question for using moosefs on post production > > > > Dear Friends, > > Hello everyone, I'm just joining this mailing list. > > I'm helping my boss to research the possibility of building a custom shared > storage for editing in his post production house. The requirement is to > provide a 24TB shared storage with troughput around 1200-1300 MB/s (for > simultaneous or concurrent editing of 2 station with 2k HD resolution + copy > file). I'm thinking of using iSCSI, Infiniband or FcOE for the transport. > > My questions is could moosefs utilized for this purpose ? Hows the details > hardware requirement for this? > > I'm sorry if this is too basic, thanks in advance. > > Best Regards, > > -- > Didi Pramujadi > > Business Development > PT. Media Mozaic Indonesia > > www.mediamozaic.com > cellphone : +62811834579 > Fax : +217408701 > skype : didipramujadi > > > ------------------------------------------------------------------------------ > What Every C/C++ and Fortran developer Should Know! > Read this article and learn how Intel has extended the reach of its > next-generation tools to help Windows* and Linux* C/C++ and Fortran > developers boost performance applications - including clusters. > http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > > |
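Throughput figures like the ones quoted in this thread are easy to sanity-check yourself. A rough sketch using plain dd against a mounted MooseFS directory; the mount point and sizes are examples, and a single stream mostly measures one client-to-chunkserver path, so run several copies in parallel to approximate aggregate numbers:

    # sequential write, flushing to the chunkservers before dd reports its rate
    dd if=/dev/zero of=/mnt/mfs/ddtest bs=1M count=1024 conv=fdatasync
    # sequential read of the same file
    dd if=/mnt/mfs/ddtest of=/dev/null bs=1M
    rm /mnt/mfs/ddtest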
From: Christoph R. <c.r...@sc...> - 2011-05-18 08:17:13
|
Hi I have a question about configuration of the MooseFS-Filesystem. I got the following infrastructure... 4 Chunkservers 1 Master 2 clients Now I want 2 Filesystems (FS1 and FS2) In FS1 is Server 1+2 and in FS2 is Server 3+4. Both should be handled by the one Masterserver and the clients should connect to both Filesystems... Now I don't know if this is possible... or do I need one Master for a Filesystem? It would be great if someone can help me :) Best regards, Ch.Raible -- Vorstand/Board of Management: Dr. Bernd Finkbeiner, Dr. Roland Niemeier, Dr. Arno Steitz, Dr. Ingrid Zech Vorsitzender des Aufsichtsrats/ Chairman of the Supervisory Board: Philippe Miltin Sitz/Registered Office: Tuebingen Registergericht/Registration Court: Stuttgart Registernummer/Commercial Register No.: HRB 382196 |
From: Michal B. <mic...@ge...> - 2011-05-18 06:32:33
|
Hi! I guess you mean 1200-1300 Mbits/second? Which would make 150 Mbytes/second? Unfortunately I doubt if you can achieve such speeds with MooseFS. Unless you use some SSD disks for chunkservers which would be very expensive. We have write speeds of about 20-30 MiB/s and reads of 30-50MiB/s at our environment with goal=2. On the other hand MooseFS would be perfect just for storing the content (it is much better optimised for large files, not the small ones). For editing purposes probably you should have separate machines and think of good "flow" of the files. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Didi Pramujadi [mailto:di...@me...] Sent: Wednesday, May 18, 2011 4:03 AM To: moo...@li... Subject: [Moosefs-users] Question for using moosefs on post production Dear Friends, Hello everyone, I'm just joining this mailing list. I'm helping my boss to research the possibility of building a custom shared storage for editing in his post production house. The requirement is to provide a 24TB shared storage with troughput around 1200-1300 MB/s (for simultaneous or concurrent editing of 2 station with 2k HD resolution + copy file). I'm thinking of using iSCSI, Infiniband or FcOE for the transport. My questions is could moosefs utilized for this purpose ? Hows the details hardware requirement for this? I'm sorry if this is too basic, thanks in advance. Best Regards, -- Didi Pramujadi Business Development PT. Media Mozaic Indonesia www.mediamozaic.com cellphone : +62811834579 Fax : +217408701 skype : didipramujadi |
From: Didi P. <di...@me...> - 2011-05-18 02:02:58
|
Dear Friends, Hello everyone, I'm just joining this mailing list. I'm helping my boss to research the possibility of building a custom shared storage for editing in his post-production house. The requirement is to provide 24TB of shared storage with throughput around 1200-1300 MB/s (for simultaneous or concurrent editing on 2 stations with 2K HD resolution, plus file copies). I'm thinking of using iSCSI, InfiniBand or FCoE for the transport. My question is: could MooseFS be used for this purpose? What are the detailed hardware requirements for this? I'm sorry if this is too basic, thanks in advance. Best Regards, -- Didi Pramujadi Business Development PT. Media Mozaic Indonesia www.mediamozaic.com cellphone : +62811834579 Fax : +217408701 skype : didipramujadi |
From: Ricardo J. B. <ric...@da...> - 2011-05-17 21:41:46
|
El Martes 17 May 2011, Tomas Lovato escribió: > I have a question. > I setup 8 chunk servers, each with 6 hdd's. I tried various file > systems(mdraid 0 and 5 and jbod) layouts with a goal of 2. I was wondering > if mfs knows that if I have 6 individual file systems per server, that it > shouldn't store two copies of a file on the same server. It would be much > easier to add additional disks further down the line if I don't have to > expand or grow a raid file systems on each server. But I want to make sure > that mfs knows not to store copies of files on > > Otherwise I was very happy with the testing so far. Copies are distributed in different servers, not in different disks on the same server, as long as you're using goal > 1. Cheers, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! |
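The goal itself is set with the client-side tools, per file or per directory tree. A hedged example using the standard MooseFS 1.6 utilities; the paths are placeholders:

    mfssetgoal -r 2 /mnt/mfs/data           # request 2 copies for everything under data/
    mfsgetgoal /mnt/mfs/data                # confirm the goal in effect
    mfsfileinfo /mnt/mfs/data/file.bin      # list which chunkservers hold each copy of each chunk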
From: jose m. <let...@us...> - 2011-05-17 18:54:24
|
* <http://www.opennebula.org/software:ecosystem:moosefs> |
From: Tomas L. <lo...@st...> - 2011-05-17 16:53:30
|
I have a question. I set up 8 chunk servers, each with 6 HDDs. I tried various file system layouts (mdraid 0 and 5, and JBOD) with a goal of 2. I was wondering, if I have 6 individual file systems per server, does mfs know that it shouldn't store two copies of a file on the same server? It would be much easier to add additional disks further down the line if I don't have to expand or grow a raid file system on each server. But I want to make sure that mfs knows not to store copies of files on the same server. Otherwise I was very happy with the testing so far. |
From: Michal B. <mic...@ge...> - 2011-05-17 09:12:36
|
Hi Andrey! Unfortunately we couldn't recreate this error. Is this repeatable in your environment? Can you give us some more tips how to recreate it? Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Andrey Belashkov [mailto:ma...@ca...] Sent: Tuesday, April 26, 2011 10:50 PM To: moo...@li... Subject: [Moosefs-users] dbench on moosefs on freebsd Hello We use moosefs 1.6.20 from ports on FreeBSD 8.2-RELEASE amd64 on ufs. When we trying 'dbench 500' on moosefs share, after some time got in dmesg: fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8 pid 3519 (mfsmount), uid 0: exited on signal 6 (core dumped) and in shell with dbench: 500 762 18.22 MB/sec execute 373 sec latency 12651.732 ms 500 763 18.17 MB/sec execute 374 sec latency 13687.228 ms 500 764 18.12 MB/sec execute 375 sec latency 14727.718 ms [765] open ./clients/client309/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10032 (Device not configu red) [768] write failed on handle 10032 (Socket is not connected) (766) ERROR: handle 10032 was not found [768] write failed on handle 10032 (Device not configured) [738] open ./clients/client272/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10026 (Socket is not connected) (739) ERROR: handle 10026 was not found [773] unlink ./clients/client204/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK ERROR: child 204 failed at line 773 [768] write failed on handle 10032 (Socket is not connected) Child failed with status 1 [718] open ./clients/client399/~dmtmp/PWRPNT/NEWPCB.PPT failed for handle 10024 (Socket is not connected) [710] rename ./clients/client136/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client136/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK ERROR: child 136 failed at line 710 (719) ERROR: handle 10024 was not found [710] rename ./clients/client455/~dmtmp/PWRPNT/NEWPCB.PPT ./clients/client455/~dmtmp/PWRPNT/PPTB1E4.TMP failed (Socket is not connected) - expected NT_STATUS_OK ERROR: child 455 failed at line 710 [808] open ./clients/client482/~dmtmp/PWRPNT/PPTC112.TMP failed for handle 10042 (Socket is not connected) [811] unlink ./clients/client161/~dmtmp/PWRPNT/PPTC112.TMP failed (Socket is not connected) - expected NT_STATUS_OK ERROR: child 161 failed at line 811 (809) ERROR: handle 10042 was not found then trying `ls /mnt/mfs`: # ls /mnt/mfs ls: .: Socket is not connected It`s normal behaviour or can be tuned? Thanks. ---------------------------------------------------------------------------- -- WhatsUp Gold - Download Free Network Management Software The most intuitive, comprehensive, and cost-effective network management toolset available today. Delivers lowest initial acquisition cost and overall TCO of any competing solution. http://p.sf.net/sfu/whatsupgold-sd _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michal B. <mic...@ge...> - 2011-05-17 09:07:58
|
Hi! This is normal behaviour. Snapshot is made on the master in an atomic way and while the snapshot is being made other operations on the master are sustained. Snapshot for 5.5 million files need to take some time. We suggest instead of doing one big snapshot doing several smaller – maybe for individual folders. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Wesley Shen. [mailto:wcs...@gm...] Sent: Wednesday, May 04, 2011 9:20 AM To: moo...@li... Subject: [Moosefs-users] I always get a slow response about accessing files when I take a snapshot. Hi all, I have a question about mfsmakesnapshot. Description: Master、Chunk、Clint MooseFS Version:1.6.15 Master、Chunk、Clint Operating System:CentOS 5.5 x86_64 Master、Chunk、Clint Filesystem:ext3 I have a client that it is a web server,and I mount(via mfsmount) a SRC directory of mfs master server on local TEST directory. And I will make a snapshot(via mfsmakesnapshot) with SRC directory that it contains about 5,600,000 files. If I take a snapshot on mfs master server and client aceess a file of TEST directory at the same time, I always get a slow response about accessing files. Does anyone have any suggestion to resolve the question ? BTW, how can I get a consultant to support MooseFS in a company? (Donate or pay for consulting) -- .:: Best Regards ::. |
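Splitting the big snapshot up as Michal suggests can be scripted. A sketch, assuming the source tree is mounted at /mnt/mfs/src and the snapshots land in a dated directory on the same MooseFS mount (all paths are examples):

    SRC=/mnt/mfs/src
    DST=/mnt/mfs/snapshots/$(date +%Y%m%d)
    mkdir -p "$DST"
    for dir in "$SRC"/*/; do
        name=$(basename "$dir")
        mfsmakesnapshot "${dir%/}" "$DST/$name"   # each folder is snapshotted atomically on its own
    done

Each mfsmakesnapshot call is still atomic on the master, but each one covers far fewer files, so clients see much shorter pauses than with a single snapshot of the whole tree.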
From: Michal B. <mic...@ge...> - 2011-05-16 07:51:12
|
Hi! You could also just wait and the file disappears by itself. Of course you can use mfsfilerepair. Regards Michał From: MarlboroMoo [mailto:mar...@gm...] Sent: Monday, May 16, 2011 8:23 AM To: moo...@li... Subject: Re: [Moosefs-users] currently unavailable reserved file ? never mind, i use "mfsfilerepair" to fix it, thanks :D 2011/5/11 MarlboroMoo <mar...@gm...> Hi there, we use the MFS 1.6.17, and got a error because our network problem, the message like below: currently unavailable chunk 00000000049E301B (inode: 10022376 ; index: 0) + currently unavailable reserved file 10022376: search/property/some.file.properties unavailable chunks: 1 unavailable reserved files: 1 how can i solve this problem, remove this file from meta ? thanks in advance ! -- Marlboromoo -- Marlboromoo |
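For anyone hitting the same state, a hedged example of the repair sequence, using the path reported earlier in this thread; note that, as I understand it, mfsfilerepair replaces chunks that cannot be found with zero-filled data, so only use it when that is acceptable:

    mfsfileinfo  /mnt/mfs/search/property/some.file.properties   # confirm which chunks are unavailable
    mfsfilerepair /mnt/mfs/search/property/some.file.properties  # clear the unavailable-chunk state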
From: Michal B. <mic...@ge...> - 2011-05-16 07:44:59
|
Hi! We at Gemius do not use any RAID as we do not see any real profit in it. Here are some of our thoughts on the subject: - stability: if one disk fails it doesn't mean the whole chunkserver is disconnected; chunkserver monitors the disks and just disconnect the broken one - less disks: by raid5, yes, by raid10 no; raid10 + goal=2 would be the same as separate disks and goal=4 - but do you need security as goal=4? Probably not and goal=3 and separate disks will demand less disks. And it is better to have data spread over three computers than over two. - speed: one would need to do detailed tests; as for individual operations for read/write RAID for sure would be quicker, but when talking about a real life environment which is heavily used, we are not sure about the RAID performance. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: wk...@bn... [mailto:wk...@bn...] Sent: Wednesday, May 11, 2011 7:26 AM To: moo...@li... Subject: [Moosefs-users] Raid for Chunkserver? Greetings: We are testing MFS and have been very impressed so far. One discussion that we are having internally is the desireability to use RAID (either Raid0, Raid1 or Raid10) on chunkservers. Obviously with a goal of 2 or more, its not a protection issue, which is the classic RAID scenario. as we see it, RAID on chunkservers has the following to recommend it. + stability: Assuming RAID1/10, if a drive dies the chunkserver doesn't immediately fall out of the cluster, and forcing a need to rebalance. We can fail it deliberately at a time of our choosing to replace the drive. + less disks: with RAID1/10 we might feel better using a goal of 2 instead of 3. + speed: RAID0 would be faster outright. RAID1/10 may provide faster on reads and slightly slower on writes (RAID 10 much better). The reasons against would be: - stability: with RAID0, the loss of a single drive kills the entire chunkserver. - stability: another layer to fail (the MD layer). - cost: increased power consumption and of course 2x or more the number of drives. We would be using Linux SoftRaid (MD driver) rather than hardware raid, so the card cost is not an issue. - speed: slightly slower writes on RAID1/10 So what is the consensus of the more experienced users? Are you using RAID (0,1,10 or others) on your chunkservers? Are we missing something on the above analysis? -bill ---------------------------------------------------------------------------- -- Achieve unprecedented app performance and reliability What every C/C++ and Fortran developer should know. Learn how Intel has extended the reach of its next-generation tools to help boost performance applications - inlcuding clusters. http://p.sf.net/sfu/intel-dev2devmay _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Michal B. <mic...@ge...> - 2011-05-16 07:15:07
|
Hi Bob! 1. Yes, you can. 2. You need to make tests by yourself :) 3. We do not recommend any filesystem. We use just ext3 and are happy. There are some users which indeed like XFS. 4. Generally you should keep the master IP and not modify tens of /etc/hosts in clients computers. But resolving in mfsmount is not very difficult and we added it in the development version of MooseFS and it will be available in next release. But still, we recommend keeping the IP address if the old master got corrupted and you need to set up a new one. Kind regards Michał From: boblin [mailto:ai...@qq...] Sent: Sunday, April 24, 2011 5:40 PM To: Michal Borychowski Cc: moosefs-users Subject: [Moosefs-users] Questions about file systems in MFS Hi, Michal : 1. Can I use serveral different filesystems (XFS/EXT3/ReiserFS) in a single MFS system ? 2. How deeply the filesystem of chunkservers affects the performance ? 3. Which filesystem is more suitable for MFS ? Someone told that XFS is better , is it true? 4. When master failover to another "new IP" , the client still try the old one even when I modify the /etc/hosts file , while metaloger and chunkserver will automatically switch to the new IP . Why can't client just do that ? Best Regards Bob |
From: Anh K. H. <ky...@vi...> - 2011-05-16 07:11:51
|
On Sun, 15 May 2011 15:21:48 +0200 Robin Waarts <li...@wa...> wrote: > I've had the same problemen, mine only where updated if the > filesize changed, maybe it is the same.. > > My problem was that I used the "mfscachemode=YES" when mouting, > changing this to "mfscachemode=AUTO" solved this for me. Thank you for the tip. I will test and give the feedback soon. Best regards, > Op 14-5-2011 4:40, Anh K. Huynh schreef: > > Hello, > > > > I've just encountered a problem with MooseFS. Two of my servers > > share a same directory /foo/bar/ via a MFS master. The contents > > of the directory are often updated. The problem is that: > > > > * when a file of the directory is updated on the first server, > > * the client on the second server still sees the old version of > > that file. > > > > If the client on the second server exists (by the command 'exit' > > on SSH terminal), and log in to the server again, they would see > > the latest version of the file. > > > > So my question is: How to force all clients to see a same version > > (the latest one) of a file when that file is updated on any mfs > > client? > > > > My MFS setting: one master, three chunk servers, all files have > > the goal 2, trash files have the goal 1. These servers are > > located in a 10 MB/s network. > > > > Thank you for your helps, > > > > Regards, > > > > > ------------------------------------------------------------------------------ > Achieve unprecedented app performance and reliability > What every C/C++ and Fortran developer should know. > Learn how Intel has extended the reach of its next-generation tools > to help boost performance applications - inlcuding clusters. > http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > -- Anh Ky Huynh @ ICT Registered Linux User #392115 |
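For the archives, the change Robin describes is a mount option. A sketch of remounting with it; the master host and mount point are examples, and the exact option syntax may vary slightly between mfsmount versions:

    umount /mnt/mfs
    mfsmount /mnt/mfs -H mfsmaster -o mfscachemode=AUTO   # AUTO lets the client drop cached data
                                                          # when a file changes elsewhere, per the report above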
From: Michal B. <mic...@ge...> - 2011-05-16 06:58:01
|
Hi! You cannot use 127.0.0.1 as IP of chunkservers. Regarding the speed while starting - you have 5 million chunks on one disk - it has to take some time :) Scanning is made in parallel on several disks, so if you have any RAID, it would be better to switch it off and use the disks individually. On the other hand, we did speed it up in 1.6.22 (not yet released), as we read file attributes in a lazy way - upon the first request. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Papp Tamas [mailto:to...@ma...] Sent: Wednesday, April 27, 2011 11:02 PM To: moo...@li... Subject: Re: [Moosefs-users] starting chunkserver On 04/27/2011 09:56 PM, Papp Tamas wrote: I don't get the error message. What's the problem with the host name, where and how should I use IP address? Well, I moved mfsmaster entry in the hosts file from 127.0.0.1 to the external IP address, and now it works: $ /etc/init.d/mfs-chunkserver start Starting mfs-chunkserver: working directory: /var/lib/mfs lockfile created and locked initializing mfschunkserver modules ... hdd space manager: scanning folder /data/ ... hdd space manager: scanning complete hdd space manager: /data/: 5122835 chunks found hdd space manager: scanning complete main server module: listen on *:9422 stats file has been loaded mfschunkserver daemon initialized properly mfschunkserver. But if it's possible, it's starting far more slower while scanning. It's really-really slow. Why? Is there any way to speed it up? Thank you, tamas |
From: Michal B. <mic...@ge...> - 2011-05-16 06:47:21
|
Hi! Unfortunately there is no easy way to do it, it is quite complicated. Our future tool set 'mfstools' probably would have possibility to tell to which file a given chunk belongs. And later you need a simple script which asks about the chunks. Now you can try to use a file from "mfsmetadump". There are chunk numbers for each i-node (lines beginning with '-') in this file. There are also connections for names tree (lines beginning with 'E'). So having a list of chunk numbers (by 'ls' on chunkserver) you can find i-node numbers and you can take respective names from the file from mfsmetadump. Kind regards -Michal From: boblin [mailto:ai...@qq...] Sent: Monday, April 25, 2011 4:09 PM To: moosefs-users Subject: [Moosefs-users] How can I know what files are stored in a chunkserver ? Dear guys : As we know , we can use "mfsfielinfo" command to find out which chunksever the given file resides . But how can I find out what files resides on a chunkserver without scanning the whole filesystem ? Best Regards ! bob |
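A rough sketch of the manual lookup Michal describes, going from a chunk id seen on a chunkserver back to a path. The dump format here is only what the message above states ('-' lines tie i-nodes to chunks, 'E' lines carry the name tree), so the exact fields and the metadata path are assumptions that may differ between versions:

    CHUNK=00000000049E301B                              # chunk id as seen in the chunkserver's data dir
    mfsmetadump /var/lib/mfs/metadata.mfs > dump.txt    # dump the master metadata to text (path is an example)
    grep '^-' dump.txt | grep -i "$CHUNK"               # find the i-node entry referencing this chunk
    # note the i-node number printed above, then walk the name-tree entries for it:
    grep '^E' dump.txt | grep -w "<inode>"              # <inode> is a placeholder for that number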
From: Michal B. <mic...@ge...> - 2011-05-16 06:29:49
|
Hi Fyodor! You can change REPLICATIONS_DELAY_DISCONNECT in mfsmaster.cfg. Default is 3600 seconds, you can lower the value to your needs. Regards -Michal -----Original Message----- From: Fyodor Ustinov [mailto:uf...@uf...] Sent: Friday, April 22, 2011 1:13 AM To: moo...@li... Subject: [Moosefs-users] How to reduce "start goal restore" time? Hi! How to reduce the time before start recovery the number of goals after a "chunks" server destruction? WBR, Fyodor. ---------------------------------------------------------------------------- -- Fulfilling the Lean Software Promise Lean software platforms are now widely adopted and the benefits have been demonstrated beyond question. Learn why your peers are replacing JEE containers with lightweight application servers - and what you can gain from the move. http://p.sf.net/sfu/vmware-sfemails _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
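As a concrete example of that answer, the setting lives in the master's configuration file; a sketch, with the path and value as examples:

    # mfsmaster.cfg (commonly /etc/mfsmaster.cfg or /etc/mfs/mfsmaster.cfg)
    # wait only 10 minutes after a chunkserver disconnect before re-replicating
    # under-goal chunks (the default, per the answer above, is 3600 seconds):
    REPLICATIONS_DELAY_DISCONNECT = 600

Reload the master afterwards so the change takes effect. Keep in mind that a value shorter than a normal chunkserver reboot or maintenance window will trigger unnecessary replication traffic every time a server briefly disconnects.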