From: Anh K. H. <ky...@vi...> - 2010-08-25 05:02:26
|
Hi, I have an MFS system with two chunk disks: a 5 GB disk (mounted as /mnt/mfs_m0) and a 30 GB disk (mounted as /mnt/mfs_m1). The MFS mount /tmp/mfs_c0 was reported to be 1 GB in size, as below:
/-------------------------------------------------------
Filesystem            Size  Used Avail Use% Mounted on
/mnt/mfs_chunk_db1_0  5.0G  4.4G  288M  94% /mnt/mfs_m0
/mnt/mfs_chunk_db1_1   30G   27G  1.2G  96% /mnt/mfs_m1
mfs#10.0.0.10:9421   1001M     0 1001M   0% /tmp/mfs_c0
ls /tmp/mfs_c0/
\-------------------------------------------------------
All these disks were completely free (0% used) before I tried to write a very large file to the MFS mount point. During the write, my program detected that the disk was full and aborted. Unfortunately, the MFS server couldn't recover from this error: although it reports that the MFS disk is free (0% used), I can't write anything to it. Moreover, the chunk disks are full; they contain many chunk files that are no longer valid. How do I recover from this error? How do I clean up the chunk files on the chunk disks? How can I tell which chunk files are still valid? Thanks for your help. -- Anh Ky Huynh |
From: Anh K. H. <ky...@vi...> - 2010-08-25 02:59:41
|
On Fri, 20 Aug 2010 12:58:40 +0200 Michał Borychowski <mic...@ge...> wrote: > You can easily use MooseFS to store any kind of content, including > log files. Performance should not be affected. But it depends on > the logging mechanism, if for writing every line the file is > continuously opened, data appended and the file closed, for sure it > is not perfect way to save logs. Here are my simple test results in an AWS EC2 environment: a 'dd' command writing about 540 MB to an MFS disk runs very slowly (28 MB/s). On the same server, the same command against a native disk (the instance disk of an AWS EC2 instance) yields around 140 MB/s. I don't know if I can deploy a database server on an MFS disk ... :(
/--------------------------------------------------------------
$ dd if=/dev/zero of=/tmp/mfs_c0/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 18.9801 s, 28.3 MB/s
$ dd if=/dev/zero of=/tmp/test bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 3.95007 s, 136 MB/s
\--------------------------------------------------------------
> And you have to remember about one thing. Different clients cannot > write at the same moment to the same file located in MooseFS. So if > you have several httpd servers, let them save the logs under > different filenames (eg. access-192.168.0.1.log, > access-192.168.0.2.log, etc.) > > -----Original Message----- > From: Anh K. Huynh [mailto:ky...@vi...] > Sent: Thursday, August 12, 2010 12:21 PM > To: moo...@li... > Subject: [Moosefs-users] Moosefs for logs > > Hi, > > I am a moosefs newbie. I am using EC2 instances and I intended to > build a moosefs system to share EBS disks between instances. > > My question is that: can I use moosefs for logs? My applications > (web server, applications) need to write to logs files, but I don't > know if there's any performance problem when logs are written to > moosefs' disk. > > Thank you for your helps, > > Regards, > -- Anh Ky Huynh |
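[Editor's note] The comparison above can be repeated with a small script. This is a sketch only: the mount points are taken from the post (substitute your own), the file size is reduced so the run is quick, and a missing MFS mount is skipped rather than failing.

```shell
# Rough write-throughput check, as in the post above. MFS_DIR/LOCAL_DIR
# are assumed paths -- point them at your own MFS mount and a local disk.
MFS_DIR=${MFS_DIR:-/tmp/mfs_c0}
LOCAL_DIR=${LOCAL_DIR:-/tmp}
COUNT=65536   # 64 MB at bs=1024; the post used 524288 (512 MB)

for dir in "$MFS_DIR" "$LOCAL_DIR"; do
    if [ -d "$dir" ]; then
        # dd prints elapsed time and MB/s on its last status line
        dd if=/dev/zero of="$dir/ddtest" bs=1024 count="$COUNT" 2>&1 | tail -n 1
        rm -f "$dir/ddtest"
    else
        echo "skipping $dir (not mounted)"
    fi
done
```

Running it on both paths back-to-back gives the same apples-to-apples numbers as the dd output quoted above.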
From: Anh K. H. <ky...@vi...> - 2010-08-24 08:12:46
|
Hello, I'd like to ask about security issues when sharing disks. E.g., I have the following schema:
MFS master
 |
 \--- mfs chunk 1 (/mnt/share1/ <-- chunk_file_1 on local disk)
 |
 \--- mfs chunk 2 (/mnt/share2/ <-- chunk_file_2 on local disk)
As far as I understand, the contents of /mnt/share1 and /mnt/share2 are synchronized. So what happens if some private data from /mnt/share1/ appears in /mnt/share2/? If chunk server 2 is compromised, can all the data on /mnt/share2 be recovered? Actually, I want to deploy a system that shares disks between instances; each instance will be an "MFS chunk" providing disks to the MFS master, but it also mounts some directories from the MFS system to store private data:
server1
 |
 \--- local_disk_to_share ----> MFS master
 |                                   |
 \--- MFS mount point <--------------/
 |
 \---- local data
More precisely, an instance won't use its local disk directly, but will use it via the MFS system. Is this a good deployment of MFS? Thank you for your replies. Regards, -- Anh Ky Huynh |
From: Michał B. <mic...@ge...> - 2010-08-24 08:10:06
|
Hi! Please find the "Taking snapshots" section on the http://www.moosefs.org/reference-guide.html webpage. Prepare a development environment with MooseFS and run some tests with snapshots. If you need any further assistance please ask. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 From: Peisheng Shuai [mailto:fit...@gm...] Sent: Friday, August 20, 2010 2:12 PM To: moo...@li... Subject: [Moosefs-users] Question about snapshot & changelog from newbie Hi, excuse me for this stupid but simple yes/no question and my poor English... I've been a moosefs newbie for 2 days, and I want to know whether moosefs has a function that can restore the storage to any point in time since it started (I mean: if I realize I made some terrible changes to files, or I want to roll back, I can simply use moosefs to restore the storage to any past state -- 10 hours ago, yesterday, last Thursday 9:00 am, any time we want). I've been told by my manager that moosefs can do that, but I didn't find any information or documentation about it. At first I thought it would be similar to snapshots in zfs, and that I could write scripts to back up snapshots of the storage state and use them to restore. Or maybe the changelog records times for changes. But he said moosefs does that automatically, and all I should do is delete the old snapshot backups and learn how to restore from them... Then I got confused and stuck. Is this related to the changelog? I have a v1.6.15 master and chunkserver running on a Gentoo Linux server, and a metalogger running on CentOS; all use the ext3 file system. I'd appreciate any help! |
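[Editor's note] MooseFS snapshots are taken on demand with the mfsmakesnapshot tool rather than automatically, which is the distinction the questioner was missing. A minimal sketch of a scripted snapshot-and-prune routine; the mount point, the snapshots/ layout, and the 7-day retention policy are all assumptions, not from the thread.

```shell
# Sketch: on-demand snapshots with mfsmakesnapshot (MooseFS 1.6+).
# MFS_ROOT and the snapshots/ layout are assumptions, not from the post.
MFS_ROOT=${MFS_ROOT:-/mnt/mfs}
STAMP=$(date +%Y%m%d-%H%M%S)

if [ -d "$MFS_ROOT" ] && command -v mfsmakesnapshot >/dev/null 2>&1; then
    mkdir -p "$MFS_ROOT/snapshots"
    # Lazy copy: source and snapshot share chunks until one side is modified.
    mfsmakesnapshot "$MFS_ROOT/projects" "$MFS_ROOT/snapshots/projects-$STAMP"
    # Assumed retention policy: drop snapshots older than 7 days.
    find "$MFS_ROOT/snapshots" -maxdepth 1 -name 'projects-*' -mtime +7 \
        -exec rm -rf {} +
else
    echo "no MooseFS mount at $MFS_ROOT; nothing to snapshot"
fi
```

Run from cron, this approximates the "restore to last Thursday 9:00 am" workflow the questioner describes: restoring is just copying back from the snapshot directory of the desired timestamp.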
From: Michał B. <mic...@ge...> - 2010-08-24 08:05:04
|
As far as we know, one of the biggest production installations of MooseFS is at our company, Gemius (http://www.gemius.com). We have four deployments; the biggest has almost 30 million files distributed over 70 chunkservers with a total space of 570 TiB. The chunkserver machines are simultaneously used for other computations. You can see several screens from our CGI monitor on the SourceForge.net website: http://sourceforge.net/projects/moosefs/ Another Polish company which uses MooseFS for data storage is Redefine (http://www.redefine.pl/). I also saw a forum entry where somebody said they had 1.5 PB of data stored on MooseFS, but I could not confirm it. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Friday, August 20, 2010 1:33 PM To: moo...@li... Subject: [Moosefs-users] What is the biggest installation? Hi! Sorry for the stupid question... What is the biggest known production installation of MooseFS? Can you describe its hardware and software configuration? wbr Alexander Akhobadze ---------------------------------------------------------------------------- -- This SF.net email is sponsored by Make an app they can't live without Enter the BlackBerry Developer Challenge http://p.sf.net/sfu/RIM-dev2dev _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Anh K. H. <ky...@vi...> - 2010-08-24 07:56:06
|
On Fri, 20 Aug 2010 12:58:40 +0200 Michał Borychowski <mic...@ge...> wrote: > You can easily use MooseFS to store any kind of content, including > log files. Performance should not be affected. But it depends on > the logging mechanism, if for writing every line the file is > continuously opened, data appended and the file closed, for sure it > is not perfect way to save logs. > > And you have to remember about one thing. Different clients cannot > write at the same moment to the same file located in MooseFS. So if > you have several httpd servers, let them save the logs under > different filenames (eg. access-192.168.0.1.log, > access-192.168.0.2.log, etc.) Thank you for your tips. I am installing and testing MFS :) I will use different locations for different MFS clients. Regards, > -----Original Message----- > From: Anh K. Huynh [mailto:ky...@vi...] > Sent: Thursday, August 12, 2010 12:21 PM > To: moo...@li... > Subject: [Moosefs-users] Moosefs for logs > > Hi, > > I am a moosefs newbie. I am using EC2 instances and I intended to > build a moosefs system to share EBS disks between instances. > > My question is that: can I use moosefs for logs? My applications > (web server, applications) need to write to logs files, but I don't > know if there's any performance problem when logs are written to > moosefs' disk. > > Thank you for your helps, > > Regards, > -- Anh Ky Huynh |
From: Kristofer P. <kri...@cy...> - 2010-08-23 19:55:56
|
Never mind, I figured this one out. From: "Kristofer Pettijohn" <kri...@cy...> To: moo...@li... Sent: Monday, August 23, 2010 7:52:56 AM Subject: [Moosefs-users] Mount points Hello, I am testing out MooseFS and have had great success and am very fond of its features.. great job on the product so far guys, and definitely keep up the good work! I do have a question. When using mfsmount to mount a filesystem, when I mount /home, it gets mounted from /home in the MFS directory tree. How can I mount /web/home on the MFS side to /home on the client system? It doesn't do this by default (I want something similar to NFS mounting: "mount nfsserver:/web/home /home"). Thanks, Kris |
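[Editor's note] Kristofer doesn't say what he found, but the usual way to do this is mfsmount's subfolder option (-S). A sketch, with the master hostname as a placeholder; the wrapper only echoes the command, so the invocation can be inspected without a live MooseFS installation.

```shell
# Mount only the /web/home subtree of the MFS namespace at /home,
# analogous to "mount nfsserver:/web/home /home". The master host below
# is a placeholder. The dry-run wrapper echoes instead of executing.
mount_mfs_subtree() {
    mountpoint=$1 master=$2 subfolder=$3
    echo mfsmount "$mountpoint" -H "$master" -S "$subfolder"
}

mount_mfs_subtree /home mfsmaster.example.net /web/home
```

Dropping the echo (or piping the output to sh) performs the real mount.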
From: Kristofer P. <kri...@cy...> - 2010-08-23 12:53:06
|
Hello, I am testing out MooseFS and have had great success and am very fond of its features.. great job on the product so far guys, and definitely keep up the good work! I do have a question. When using mfsmount to mount a filesystem, when I mount /home, it gets mounted from /home in the MFS directory tree. How can I mount /web/home on the MFS side to /home on the client system? It doesn't do this by default (I want something similar to NFS mounting: "mount nfsserver:/web/home /home"). Thanks, Kris |
From: Scoleri, S. <Sco...@gs...> - 2010-08-21 16:52:53
|
I think this is more a function/limitation of FUSE, but has anyone found a workaround or figured out a way to use a loop device (like one created with losetup) on a MooseFS mount? Probably related: what about tap:aio for virtual block devices? Many of the Linux distro vendors are writing VM guest creation code that defaults the backing device to tap:aio instead of a file, without even offering a file option -- for example virt-install, which comes with Red Hat. I've been searching for a way to do this for some time. File-type backing devices for VMs work just fine. Thanks, -Scoleri |
From: Peisheng S. <fit...@gm...> - 2010-08-20 12:12:23
|
Hi, excuse me for this stupid but simple yes/no question and my poor English... I've been a moosefs newbie for 2 days, and I want to know whether moosefs has a function that can restore the storage to any point in time since it started (I mean: if I realize I made some terrible changes to files, or I want to roll back, I can simply use moosefs to restore the storage to any past state -- 10 hours ago, yesterday, last Thursday 9:00 am, any time we want). I've been told by my manager that moosefs can do that, but I didn't find any information or documentation about it. At first I thought it would be similar to snapshots in zfs, and that I could write scripts to back up snapshots of the storage state and use them to restore. Or maybe the changelog records times for changes. But he said moosefs does that automatically, and all I should do is delete the old snapshot backups and learn how to restore from them... Then I got confused and stuck. Is this related to the changelog? I have a v1.6.15 master and chunkserver running on a Gentoo Linux server, and a metalogger running on CentOS; all use the ext3 file system. I'd appreciate any help! |
From: Alexander A. <akh...@ri...> - 2010-08-20 11:33:18
|
Hi! Sorry for the stupid question... What is the biggest known production installation of MooseFS? Can you describe its hardware and software configuration? wbr Alexander Akhobadze |
From: Michał B. <mic...@ge...> - 2010-08-20 10:58:57
|
Hi! You can easily use MooseFS to store any kind of content, including log files. Performance should not be affected. But it depends on the logging mechanism: if for every line the file is opened, data appended, and the file closed, that is certainly not a perfect way to save logs. And you have to remember one thing: different clients cannot write to the same file located in MooseFS at the same moment. So if you have several httpd servers, let them save their logs under different filenames (e.g. access-192.168.0.1.log, access-192.168.0.2.log, etc.). If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Anh K. Huynh [mailto:ky...@vi...] Sent: Thursday, August 12, 2010 12:21 PM To: moo...@li... Subject: [Moosefs-users] Moosefs for logs Hi, I am a moosefs newbie. I am using EC2 instances and I intended to build a moosefs system to share EBS disks between instances. My question is: can I use moosefs for logs? My applications (web server, applications) need to write to log files, but I don't know if there's any performance problem when logs are written to a moosefs disk. Thank you for your help, Regards, -- Anh Ky Huynh |
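[Editor's note] The per-client filename advice above can be scripted. A sketch, where the log directory and the naming scheme are assumptions:

```shell
# Derive a per-host log path on the shared mount, so no two clients ever
# write the same file (per the advice above). Directory and naming
# scheme are assumptions.
LOGDIR=${LOGDIR:-/mnt/mfs/logs}
HOST_IP=$(hostname -i 2>/dev/null | awk '{print $1}')
LOGFILE="$LOGDIR/access-${HOST_IP:-$(hostname)}.log"   # fall back to hostname
echo "$LOGFILE"
```

In an Apache vhost the result would be used as, e.g., "CustomLog /mnt/mfs/logs/access-192.168.0.1.log combined", one distinct file per web server.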
From: Michał B. <mic...@ge...> - 2010-08-20 10:43:50
|
Hi Stas! Ad. 1) Yes. When one copy of a chunk is located on the computer from which the request to return all locations of the file comes (both at read and at write), then this location is sent first. In the case of reading, the client chooses this location as the preferred one; in the case of writing, the location is placed first in the chain of connections. Ad. 2) No. It is on our roadmap, but we cannot yet tell when this functionality will be implemented. If you need to increase performance you can have a look here: http://www.moosefs.org/moosefs-faq.html#mtu. Link aggregation really helps. Regards Michal From: Stas Oskin [mailto:sta...@gm...] Sent: Sunday, August 15, 2010 9:55 PM To: moosefs-users Subject: [Moosefs-users] Local writing and rack awareness? Hi. I have a couple of questions I wanted to ask about MFS: 1) When an MFS chunkserver writes data, does it recognize this and write to itself first, and also read from itself first, in order to speed up operations? 2) Also, does MFS support rack awareness, where it replicates files within the same subnet first, and only later moves them to other subnets? Regards. |
From: Steve <st...@bo...> - 2010-08-19 15:57:15
|
How or why would all chunkservers be down, unless you're taking the whole system down? It's not immediately important that a chunkserver is down if you have enough of them; your files are still available and duplicated. That's the beauty of MFS. -------Original Message------- From: Alexander Akhobadze Date: 08/19/10 14:27:14 To: moo...@li... Subject: Re: [Moosefs-users] MFS Chunk DHCP OK. I see... May be it will be a good idea to inform MFS Clients? I mean that when all chunkserver storing some requested by client file is down on the client I see a great slowdown. I think in this case will be more convenient to immediately return en error code to client and not to make it to wait. Alexander Akhobadze ====================================================== If a chunkserver is down the master server would inform you (the admin) - for sure in the CGI Monitor or maybe by email. It is not yet implemented so it is difficult to tell exactly what options would be available. Regards Michal -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Thursday, August 19, 2010 10:46 AM Subject: Re: [Moosefs-users] MFS Chunk DHCP Thank you for your reply. Now it is clear that the idea using DHCP is a bad idea. But it is not completely clear what is "chunkserver awareness". You write "...master server would inform about it". Inform who ? wbr Alexander Akhobadze ====================================================== At this moment it is possible to assign (any) IP address to chunk server via DHCP. But we think about introducing "chunkserver awareness" so that the master server remembers which chunkservers got connected to it and in case any of them gets disconnected, the master server would inform about it. Then, we would need to know IP addresses of the chunkservers and the IP addresses should not be changed when chunkserver is restarted. In such a case you could configure DHCP that it gives the same IP address to the machine based on its MAC address. If you need any further assistance please let us know. -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Wednesday, August 18, 2010 4:32 PM To: moo...@li... Subject: [Moosefs-users] MFS Chunk DHCP I wonder if it is possible to assign IP address to chunk servers via DHCP ? Will it be in result of data loss while chunk servers at next boot gets a different address ? |
From: Alexander A. <akh...@ri...> - 2010-08-19 13:26:39
|
OK, I see... Maybe it would also be a good idea to inform MFS clients? I mean that when all the chunkservers storing a file requested by a client are down, I see a great slowdown on the client. I think in this case it would be more convenient to return an error code to the client immediately, rather than make it wait. Alexander Akhobadze ====================================================== If a chunkserver is down the master server would inform you (the admin) - for sure in the CGI Monitor or maybe by email. It is not yet implemented so it is difficult to tell exactly what options would be available. Regards Michal -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Thursday, August 19, 2010 10:46 AM Subject: Re: [Moosefs-users] MFS Chunk DHCP Thank you for your reply. Now it is clear that the idea using DHCP is a bad idea. But it is not completely clear what is "chunkserver awareness". You write "...master server would inform about it". Inform who ? wbr Alexander Akhobadze ====================================================== At this moment it is possible to assign (any) IP address to chunk server via DHCP. But we think about introducing "chunkserver awareness" so that the master server remembers which chunkservers got connected to it and in case any of them gets disconnected, the master server would inform about it. Then, we would need to know IP addresses of the chunkservers and the IP addresses should not be changed when chunkserver is restarted. In such a case you could configure DHCP that it gives the same IP address to the machine based on its MAC address. If you need any further assistance please let us know. -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Wednesday, August 18, 2010 4:32 PM To: moo...@li... Subject: [Moosefs-users] MFS Chunk DHCP I wonder if it is possible to assign IP address to chunk servers via DHCP ? Will it be in result of data loss while chunk servers at next boot gets a different address ? |
From: Michał B. <mic...@ge...> - 2010-08-19 08:58:14
|
If a chunkserver is down the master server would inform you (the admin) - for sure in the CGI Monitor or maybe by email. It is not yet implemented so it is difficult to tell exactly what options would be available. Regards Michal -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Thursday, August 19, 2010 10:46 AM To: Michał Borychowski Cc: moo...@li... Subject: Re: [Moosefs-users] MFS Chunk DHCP Thank you for your reply. Now it is clear that the idea using DHCP is a bad idea. But it is not completely clear what is "chunkserver awareness". You write "...master server would inform about it". Inform who ? wbr Alexander Akhobadze ====================================================== You wrote, on 19 August 2010 at 12:31:20: ====================================================== At this moment it is possible to assign (any) IP address to chunk server via DHCP. But we think about introducing "chunkserver awareness" so that the master server remembers which chunkservers got connected to it and in case any of them gets disconnected, the master server would inform about it. Then, we would need to know IP addresses of the chunkservers and the IP addresses should not be changed when chunkserver is restarted. In such a case you could configure DHCP that it gives the same IP address to the machine based on its MAC address. If you need any further assistance please let us know. -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Wednesday, August 18, 2010 4:32 PM To: moo...@li... Subject: [Moosefs-users] MFS Chunk DHCP I wonder if it is possible to assign IP address to chunk servers via DHCP ? Will it be in result of data loss while chunk servers at next boot gets a different address ? |
From: Alexander A. <akh...@ri...> - 2010-08-19 08:45:56
|
Thank you for your reply. Now it is clear that using DHCP is a bad idea. But it is still not completely clear what "chunkserver awareness" is. You write "...master server would inform about it". Inform whom? wbr Alexander Akhobadze ====================================================== You wrote, on 19 August 2010 at 12:31:20: ====================================================== At this moment it is possible to assign (any) IP address to chunk server via DHCP. But we think about introducing "chunkserver awareness" so that the master server remembers which chunkservers got connected to it and in case any of them gets disconnected, the master server would inform about it. Then, we would need to know IP addresses of the chunkservers and the IP addresses should not be changed when chunkserver is restarted. In such a case you could configure DHCP that it gives the same IP address to the machine based on its MAC address. If you need any further assistance please let us know. -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Wednesday, August 18, 2010 4:32 PM To: moo...@li... Subject: [Moosefs-users] MFS Chunk DHCP I wonder if it is possible to assign IP address to chunk servers via DHCP ? Will it be in result of data loss while chunk servers at next boot gets a different address ? |
From: Michał B. <mic...@ge...> - 2010-08-19 08:31:37
|
At this moment it is possible to assign (any) IP address to a chunk server via DHCP. But we are thinking about introducing "chunkserver awareness", so that the master server remembers which chunkservers got connected to it and, in case any of them gets disconnected, the master server would inform about it. Then we would need to know the IP addresses of the chunkservers, and the IP addresses should not change when a chunkserver is restarted. In such a case you could configure DHCP to give the same IP address to the machine based on its MAC address. If you need any further assistance please let us know. Kind regards Michał Borychowski MooseFS Support Manager _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Gemius S.A. ul. Wołoska 7, 02-672 Warszawa Budynek MARS, klatka D Tel.: +4822 874-41-00 Fax : +4822 874-41-01 -----Original Message----- From: Alexander Akhobadze [mailto:akh...@ri...] Sent: Wednesday, August 18, 2010 4:32 PM To: moo...@li... Subject: [Moosefs-users] MFS Chunk DHCP Hi! I wonder if it is possible to assign IP addresses to chunk servers via DHCP? Will it result in data loss if a chunk server gets a different address at next boot? wbr Alexander Akhobadze |
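[Editor's note] The reservation Michał describes looks roughly like this in ISC dhcpd configuration; the MAC addresses and IPs below are placeholders. The sketch writes the fragment to a scratch file so it can be inspected before being merged into dhcpd.conf.

```shell
# Pin each chunkserver's address in ISC dhcpd by MAC, so restarts keep
# the same IP (as suggested above). MACs and IPs are placeholders.
cat > /tmp/mfs-chunkservers.conf <<'EOF'
host chunk1 { hardware ethernet 00:16:3e:aa:bb:01; fixed-address 10.0.0.11; }
host chunk2 { hardware ethernet 00:16:3e:aa:bb:02; fixed-address 10.0.0.12; }
EOF
grep -c fixed-address /tmp/mfs-chunkservers.conf
```

With such reservations in place, a chunkserver reboot cannot land on a different address, which removes the data-loss worry raised in the original question.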
From: Alexander A. <akh...@ri...> - 2010-08-18 14:47:12
|
Hi! I wonder if it is possible to assign IP addresses to chunk servers via DHCP? Will it result in data loss if a chunk server gets a different address at next boot? wbr Alexander Akhobadze |
From: Laurent W. <lw...@hy...> - 2010-08-18 09:44:02
|
On Mon, 16 Aug 2010 15:21:04 +0300 Stas Oskin <sta...@gm...> wrote: > > > > The answer is available on the website :) > > > > This one: > http://www.moosefs.org/mini-howtos.html#upgrade-to-1.6 No, it's written in the announcements of new version availability. You don't upgrade from 1.5.x to 1.6.y, do you? > > > > Which one do you use ? What's your boxes status about rpmforge ? I > > can't answer with so few details. > > > > > From your RPMs to the ones at RPMForge. First, add the rpmforge repo to your install; see http://rpmrepo.org/RPMforge/Using. As the .cfg files are not from the base package, they won't be deleted or overwritten. I also advise you to disable rpmforge by default (it owns some packages already provided by the base and update repos, in different versions, and I guess you don't want those by default): edit /etc/yum.repos.d/rpmforge.repo and change the "enabled" line. Then "yum update mfs --enablerepo=rpmforge" should be enough; afterwards restart the service. Please read "yum info mfs", "yum info mfs-cgi" and "yum info mfs-client" to know precisely which packages you have on a given machine. HTH, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Laurent W. <lw...@hy...> - 2010-08-16 09:36:50
|
On Mon, 16 Aug 2010 12:08:13 +0300 Stas Oskin <sta...@gm...> wrote: > Hi. > > What is the best way and safest way to upgrade MFS to latest version? The answer is available on the website :) > > Also, what would be the best way to switch from current RPM's I use, to > RPMForge repo? Which one do you use ? What's your boxes status about rpmforge ? I can't answer with so few details. Regards, -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Michał B. <mic...@ge...> - 2010-08-16 09:24:11
|
Your point B is closer to the real writing process, but the client doesn't wait before sending more data: it sends new data before it receives confirmation that the previous data was written. Only removal from the write queue happens after the write confirmation. The writing schema looks like this:

send '1'
|   /- send '2'
|   |   /- send '3'
|   |   |   /- remove '1' from buffer
|   |   |   |   /- remove '2' from buffer
v   v   v   v   v   /- remove '3' from buffer
Client: --+---+---+-------+---+---+------------
           \1  \2  \3    /1  /2  /3
CS1:    ----+111+222+333+---+---+--------------
             \1  \2  \3/1  /2  /3
CS2:    ------+111+222+333+--+-----------------
               \1  \2/1\3/2  /3
CS3:    --------+111+222+333+------------------

\X - sending data of block 'X'
/X - receiving status of block 'X'
-XXX- - writing data of block 'X'

For clarity we show the writing of only three blocks. So between goal=1 and goal=3 there is a constant difference, independent of the number of blocks. There are of course some "chokes" during writes to the disks. For goal=1 we only have the delays caused by the chokes of one disk, but for goal=3 we accumulate the delays of all the disks. This is because each chunkserver writes only one block in parallel, so each write "choke" delays the whole process, regardless of which chunkserver it occurs on. Kind regards Michal Borychowski From: kuer ku [mailto:ku...@gm...] Sent: Wednesday, August 11, 2010 4:44 AM To: Michał Borychowski Subject: Re: [Moosefs-users] How fast can you copy files to your Moosefs ? Hi, Michal, With goal=3 data transmission looks like this: client <=> cs1 <=> cs2 <=> cs3 In such a situation, when does the client finish the write operation? A. When the client finishes writing to cs1, it can consider the write finished; cs1 issues the write to cs2 asynchronously, and cs2 does the same in turn. In this case, when the client finishes writing, MFS has at least 1 copy of the data, and MFS will try to replicate more copies asynchronously. B.
The client waits for the write operation to end until cs1 finishes writing, and cs2 finishes writing, and cs3 finishes writing. In this case, when the client finishes writing, MFS has 3 copies of the data.

Michal, do you mean case A? In case A, setting a higher goal would not decrease performance too much (from the client's point of view).

-- kuer

2010/8/9 Michał Borychowski <mic...@ge...>

This is not normal behavior. With goal=3, data transmission looks like this:

  client <=> cs1 <=> cs2 <=> cs3

(where the order of chunkservers is different for different chunks) and it should normally work at speeds similar to goal=1. For a 100Mbit/s network we would expect 6-7MB/s. But please check whether you have full-duplex enabled; with only half-duplex, speed can be substantially lower. As stated before, we definitely recommend 1Gbit/s. And please check that the existing network is properly configured.

Kind regards
Michał Borychowski


From: Chen, Alvin [mailto:alv...@in...]
Sent: Thursday, July 29, 2010 3:57 AM
To: Micha? Borychowski
Cc: moo...@li...
Subject: Re: [Moosefs-users] How fast can you copy files to your Moosefs ?

I mean setting goal to 3. With goal set to 1, the writing speed is 9MBytes/sec on 100Mbps networking, but with goal set to 3 the speed is just 500KBytes/sec. On 100Mbps networking the speed can reach at most 12.5MBytes/sec, so 9MBytes/sec with goal=1 is reasonable, but with goal=3 the speed should still be around 3MBytes/sec. By the way, I just used scp to copy a 4GB data file to the mount folder.

Best regards,
Alvin Chen
ICFS Platform Engineering Solution
Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region
Intel Information Technology
Tel. 010-82171960 inet.8-7581960
Email. alv...@in...


From: mic...@ge... [mailto:mic...@ge...]
Sent: Wednesday, July 28, 2010 7:45 PM
To: Chen, Alvin
Cc: moo...@li...
Subject: RE: [Moosefs-users] How fast can you copy files to your Moosefs ?

First of all, if you want better performance you should use a gigabit network.
We see writes of about 20-30MiB/s (have a look here: http://www.moosefs.org/moosefs-faq.html#average). You can also have a look here: http://www.moosefs.org/moosefs-faq.html#mtu for some network tips.

PS. Talking about 3 copies, do you mean setting goal=3 or copying 3 files simultaneously?

Kind regards
Michal Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01


From: Chen, Alvin [mailto:alv...@in...]
Sent: Tuesday, July 27, 2010 10:52 AM
To: moo...@li...
Subject: [Moosefs-users] How fast can you copy files to your Moosefs ?

Hi guys,

I am a new user of MooseFS. I have 3 chunk servers and one master server on a 100Mbps network. I just copied a 4GB file from one client machine to MooseFS: the copying speed can reach 9MB/s with just one copy, but only 500KB/s with 3 copies. How fast can your MooseFS go? Does anybody get better performance?

Best regards,
Alvin Chen
ICFS Platform Engineering Solution
Flex Services (CMMI Level 3, IQA2005, IQA2008), Greater Asia Region
Intel Information Technology
Tel. 010-82171960 inet.8-7581960
Email. alv...@in...

_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users |
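[Editor's note] The constant-difference claim in the pipelined-write explanation above can be sketched with a small, purely illustrative model (not MooseFS code): in a chain client -> cs1 -> ... -> csN, a new block enters the pipeline every step, and each block needs one extra step per additional chunkserver, so a higher goal adds only a fixed latency, independent of how many blocks are written.

```python
def pipelined_write_steps(blocks: int, goal: int) -> int:
    """Total time steps to push `blocks` blocks through a chain of
    `goal` chunkservers, assuming one block advances one hop per step.
    Once the first block reaches the last server the pipeline is full,
    so total = blocks + (goal - 1)."""
    return blocks + (goal - 1)

# The gap between goal=1 and goal=3 stays constant as blocks grow:
for blocks in (3, 1000):
    gap = pipelined_write_steps(blocks, 3) - pipelined_write_steps(blocks, 1)
    print(blocks, gap)  # gap is 2 in both cases
```

Under this model, a single slow disk ("choke") stalls every block behind it in the chain, which matches the observation that with goal=3 the delays of all disks accumulate.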
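[Editor's note] The duplex advice in the thread above can be made concrete with a rough back-of-envelope model. The numbers here are illustrative assumptions, not MooseFS measurements: every intermediate chunkserver in the chain must receive and forward data at the same time, which full duplex supports at full rate while half duplex forces the two directions to alternate, roughly halving throughput. The overhead factor is a fudge chosen to reproduce the 6-7 MB/s figure quoted above.

```python
def expected_chain_write_mbytes(link_mbit: float, full_duplex: bool,
                                protocol_overhead: float = 0.55) -> float:
    """Rough throughput ceiling for a chained write client -> cs1 -> cs2 -> cs3.

    `protocol_overhead` is an assumed fudge factor for TCP/framing losses;
    half duplex halves the result because receive and forward traffic on an
    intermediate chunkserver must share the wire."""
    raw_mbytes = link_mbit / 8.0          # 100 Mbit/s link ~ 12.5 MB/s raw
    usable = raw_mbytes * protocol_overhead
    return usable if full_duplex else usable / 2.0

print(round(expected_chain_write_mbytes(100, True), 1))   # roughly the 6-7 MB/s quoted
print(round(expected_chain_write_mbytes(100, False), 1))  # substantially lower
```

This also suggests why 500KB/s with goal=3 points at a network misconfiguration rather than expected chain overhead: even the pessimistic half-duplex estimate is several MB/s.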
From: Stas O. <sta...@gm...> - 2010-08-16 09:08:40
|
Hi.

What is the best and safest way to upgrade MFS to the latest version?

Also, what would be the best way to switch from the RPMs I currently use to the RPMForge repo?

Thanks. |
From: Stas O. <sta...@gm...> - 2010-08-16 09:04:49
|
Hi.

> hmmmm. master decides where data are to be written. Chunkservers read
> or write data when they are commanded. Chunkservers do not, afaik, take
> decisions on their own about read or write.

The idea is that when the application runs on the chunkserver itself, it would write and read locally for improved performance.

> I think you're doing a mix-up between rack and subnet. you can
> perfectly have the same subnet in several racks.
> Anyway, rack awareness is not yet implemented. It's on the todo list as
> far as I can remember. (can't verify as I can't access web for now).

You're right - this would be specified on the chunkserver level as an additional ID. The idea is to have reads within the same rack, but replications across racks. This increases performance and lowers intra-/inter-rack network bandwidth usage, while still providing the required level of fault tolerance. These two features are being implemented in many DFS systems these days, and I think they would be a great addition to MFS as well.

For example: http://kosmosfs.sourceforge.net/features.html

"Rack-aware data placement: The chunk placement algorithm is rack-aware. Wherever possible, it places chunks on different racks. . . Local read optimization: When applications are run on the same nodes as chunkservers, the CloudStore client library contains an optimization for reading data locally. That is, if the chunk is stored on the same node as the one on which the application is executing, data is read from the local node."

Regards. |
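[Editor's note] The two KosmosFS-style features described above - rack-aware placement and local read optimization - can be sketched as follows. This is a hypothetical illustration with made-up helper names (`place_replicas`, `pick_read_source`), not MooseFS internals.

```python
import random
from collections import defaultdict

def place_replicas(chunkservers, goal):
    """Pick `goal` chunkservers, preferring one per rack.
    `chunkservers` is a list of (host, rack) tuples."""
    by_rack = defaultdict(list)
    for host, rack in chunkservers:
        by_rack[rack].append((host, rack))
    racks = list(by_rack)
    random.shuffle(racks)                      # spread load across racks
    chosen = []
    for rack in racks:                         # first pass: distinct racks
        if len(chosen) == goal:
            break
        chosen.append(random.choice(by_rack[rack]))
    leftovers = [cs for cs in chunkservers if cs not in chosen]
    while len(chosen) < goal and leftovers:    # second pass: reuse racks
        chosen.append(leftovers.pop())
    return chosen

def pick_read_source(replicas, local_host):
    """Local read optimization: prefer a replica on the reader's own host,
    falling back to any replica otherwise."""
    for host, rack in replicas:
        if host == local_host:
            return (host, rack)
    return replicas[0]
```

With three racks and goal=3, the first pass alone yields one replica per rack, so a whole-rack failure loses at most one copy; reads issued on a chunkserver host never leave the machine.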