From: Anh K. H. <ky...@vi...> - 2011-06-05 03:04:02
On Sat, 4 Jun 2011 15:05:50 +0000 Jean-Baptiste <jb...@jb...> wrote:

> On Tue, May 31, 2011 at 2:13 PM, Anh K. Huynh <ky...@vi...> wrote:
>
> > IMHO, Git allows developers to work *easily* on private branches,
> > so we may not see any activities on the repository. BTW, this is
> > my opinion; this isn't official reply from MooseFS team.
>
> Thank you for your answer. Is that possible to have an official
> reply from the MooseFS about those private branches ? It could be
> quite frustrating to develop something on the public trunk and
> having a difficult merge time when a new release pops in the public
> git repository.
>
> I'm not saying anything wrong about the MooseFS team here, i think
> it's perfectly understandable to have non-public working repository,
> but maybe it will encourage people to contribute if they can have an
> eye on the work in progress =)

BTW, the IRC channel #moosefs is quite active and open as far as I know. Feel free to join it :) I love the guys there!

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
From: Anh K. H. <ky...@vi...> - 2011-06-05 02:49:03
On Sat, 4 Jun 2011 15:05:50 +0000 Jean-Baptiste <jb...@jb...> wrote:

> On Tue, May 31, 2011 at 2:13 PM, Anh K. Huynh <ky...@vi...> wrote:
>
> > IMHO, Git allows developers to work *easily* on private branches,
> > so we may not see any activities on the repository. BTW, this is
> > my opinion; this isn't official reply from MooseFS team.
>
> Thank you for your answer. Is that possible to have an official
> reply from the MooseFS about those private branches ? It could be
> quite frustrating to develop something on the public trunk and
> having a difficult merge time when a new release pops in the public
> git repository.
>
> I'm not saying anything wrong about the MooseFS team here, i think
> it's perfectly understandable to have non-public working repository,
> but maybe it will encourage people to contribute if they can have an
> eye on the work in progress =)

I think the best way is... to do something, don't wait. The source is open and anyone can benefit from that, and any developer can play with it rather than wait for the authors.

Some months ago I started with very tiny things: adding comments to the MooseFS code, thinking that the comments may help someone. Unfortunately, I permanently deleted my work when cleaning up my computer and my git repositories :(

Now I focus on the Python script "mfscgi" which is used to monitor MooseFS. I think I would rewrite it in a more flexible way, so that the notification system can use it directly. Please wait for my commit :P Or join me!

Just my two cents,

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
From: Jean-Baptiste <jb...@jb...> - 2011-06-04 15:20:20
On Tue, May 31, 2011 at 2:13 PM, Anh K. Huynh <ky...@vi...> wrote:

> IMHO, Git allows developers to work *easily* on private branches, so we
> may not see any activities on the repository. BTW, this is my opinion;
> this isn't official reply from MooseFS team.

Thank you for your answer. Is it possible to have an official reply from the MooseFS team about those private branches? It could be quite frustrating to develop something on the public trunk and then have a difficult merge when a new release pops into the public git repository.

I'm not saying anything is wrong with the MooseFS team here, I think it's perfectly understandable to have a non-public working repository, but maybe it will encourage people to contribute if they can have an eye on the work in progress =)

Jean-Baptiste
From: lcfc2316 <lcf...@16...> - 2011-06-03 08:46:52
Hi support,

I changed an mfsclient from "rw" to "ro" in the file "mfsexports.cfg" and restarted mfsmaster. But in the web monitor it doesn't change, it still shows "rw". After I re-mount the mfsclient, I get "ro". The client was already mounted to the mfsmaster before I made the change. Is there any way to avoid re-mounting the mfsclient?

Version: 1.6.20
OS: Red Hat Enterprise Linux Server release 5.3 x86_64 (all three)
File system: ext3

Best regards!
2011-06-03
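For reference, a minimal sketch of the change described above. The export line, mount point and master hostname are illustrative assumptions (following the usual mfsexports.cfg layout of ADDRESS DIRECTORY OPTIONS), not taken from the poster's configuration, and in this report the new option only took effect after the client was re-mounted:

  # /etc/mfsexports.cfg - switch the client's entry from rw to ro (hypothetical entry)
  192.168.1.0/24  /  ro,alldirs,maproot=0

  # on the master: restart so the edited exports file is read
  mfsmaster restart

  # on the client: re-mount, which is what finally showed "ro" in this case
  umount /mnt/mfs
  mfsmount /mnt/mfs -H mfsmaster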
From: Tuukka L. <tlu...@gm...> - 2011-06-02 07:54:43
OK, I put in place the file the dev sent me and cannot see any data loss... I found the file in question, the one I got in the error, and it seems fine. The whole system is up and functioning.

I run the system on an old desktop computer and another PC I bought for $25, so the dev recommends making sure you have good memory, but I guess I am using whatever I got =) Aside from this error everything has been fine. I didn't run the memtest they recommended, but I would not count out memory errors.

However, I would like to understand the situation better, mainly to know what my recourses are. As WK articulated already: had my metadata been completely corrupt, would I have lost all my data? Would I lose just one file, the one with the error? And can I fix this error myself?

Thanks,
Tuukka

On Wed, Jun 1, 2011 at 3:39 PM, WK <wk...@bn...> wrote:
> On 6/1/2011 2:30 AM, Michal Borychowski wrote:
>>
>> We think that this problem could be caused by your RAM in the master. We
>> recommend using RAM with parity control. You can also run a test from
>> http://www.memtest.org/ on your server and check your existing RAM. Of
>> course, the bit could have been changed also on the motherboard level or CPU
>> - which is much less probable.
>>
>> Also you can see in the log that file 7538 is located between 7553 and 7555:
>
> So in a situation like this where the metadata is now corrupt.
>
> Is the problem fixable with only the loss of the one file? (and how does
> one fix it).
>
> or is his entire MFS setup completely corrupt and he would need to have
> had a backup?
>
> Can I assume that older archived versions of the metadata.mfs could be
> used to recover most of the files.
>
> -bill
From: WK <wk...@bn...> - 2011-06-01 22:39:46
On 6/1/2011 2:30 AM, Michal Borychowski wrote:
>
> We think that this problem could be caused by your RAM in the master. We
> recommend using RAM with parity control. You can also run a test from
> http://www.memtest.org/ on your server and check your existing RAM. Of
> course, the bit could have been changed also on the motherboard level or CPU
> - which is much less probable.
>
> Also you can see in the log that file 7538 is located between 7553 and 7555:

So, in a situation like this where the metadata is now corrupt:

Is the problem fixable with only the loss of the one file? (And how does one fix it?)

Or is his entire MFS setup completely corrupt, so that he would need to have had a backup?

Can I assume that older archived versions of the metadata.mfs could be used to recover most of the files?

-bill
From: Josef <pe...@p-...> - 2011-06-01 21:21:38
The network device with the moosefs master is eth1, and the master was running. But I found out that there is a huge delay between the network being up and it actually starting to work. I forgot to mention that it is a Xen PV guest, so I guess it takes some time before the Xen host adds the guest eth to its bridge.

So I have solved it with a stupid init script that waits 10 seconds and remounts all fuse filesystems:

  echo "Remounting for MFS"
  sleep 10
  mount -a -t fuse

A nicer solution would be to add a delay to the ifup script of the network interface...

On 6/1/11 10:55 AM, Michal Borychowski wrote:
>> -----Original Message-----
>> From: Josef [mailto:pe...@p-...]
>> Sent: Tuesday, May 31, 2011 4:41 PM
>> To: moo...@li...
>> Subject: [Moosefs-users] automount in debian
>>
>> Hello,
>> I'm having problems with automount under debian (squeeze). I have
>> this line in my fstab:
>> /opt/mfs/bin/mfsmount /mnt/mfs fuse
>> mfsmaster=172.16.100.2,mfsport=9421,_netdev 0 0
>>
>> during the startup debian finds out that it is a net device, so it calls
>> if-up.d/mountmfs, but it fails:
>>
>> Configuring network interfaces...if-up.d/mountnfs[eth0]: waiting for
>> interface eth1 before doing NFS mounts ... (warning).
> This message comes from Debian init scripts and means that eth0 has been
> initialized, but eth1 isn't yet, so mounting of network filesystems is
> delayed until initialization of the remaining network interfaces.
>
>> can't connect to mfsmaster ("172.16.100.2":"9421")
> It's an mfsmount message, probably occurs after initialization of eth1.
>
> There are a few possible causes, e.g.:
> - master is not running yet
> - given host is not accessible yet (maybe there is some delay between
>   initializing network interface and network being accessible?)
>
> Kind regards
> Michal Borychowski
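A sketch of the "nicer solution" mentioned above, i.e. hooking into Debian's ifupdown scripts instead of using a standalone init script. This is a hedged example only: the file name, the interface name and the 10-second delay are assumptions to be adjusted per setup; Debian's ifupdown exports the interface name to /etc/network/if-up.d/ hooks in the IFACE variable.

  #!/bin/sh
  # /etc/network/if-up.d/zz-remount-mfs   (hypothetical name; must be executable)
  # Only act when the interface carrying MooseFS traffic comes up,
  # wait for the Xen bridge to settle, then remount fuse filesystems from fstab.
  [ "$IFACE" = "eth1" ] || exit 0
  sleep 10
  mount -a -t fuse
  exit 0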
From: jose m. <let...@us...> - 2011-06-01 12:01:18
On Mon, 30 May 2011 at 10:09 +0200, Michal Borychowski wrote:
> Hi!
>
> You often ask "how to do this" / "why can't I do this in command line"
> and every time we answer that sometime we'll prepare a set of tools
> called "mastertools" where you could do lots of useful tasks. And this
> moment is slowly coming up :)
>
> We have thought of some tools but for sure you can give us more
> suggestions:
>
> - mfschunkinfo chunkid [, chunkid ...]
> - mfsinodeinfo inode [, inode ...]
> - mfsinfo or mfsstats (?) - returning how many files, folders, missing
>   chunks, etc. there are
> - mfscsinfo - returning list of connected chunkservers
> - mfsmlinfo - returning list of connected metaloggers
> - mfsclinfo - returning list of connected clients
> - mfshdinfo - returning list of connected hard drives

* mfs(repair,erase,delete ...) (chunk or inode number) (/mount/point), on any location (trash or cluster, .....), force ....

* mfscsinfo, mfsmlinfo, mfsclinfo, mfshdinfo: standardized outputs for alerting purposes (read_error, write_error, damaged .......)

* mfslist (options) metadumpfile /mount/point > list.txt (list files on -c cluster with -g goal n in format "/mount/point/dir/dir/file"), to apply various commands to the list or for other purposes: -t (trash), -u (unavailable) .......

* massive operations with find, ls (mfsfileinfo, ..... is not reasonable with millions of files, or thousands of errors in a repair)
From: Michal B. <mic...@ge...> - 2011-06-01 09:30:31
Hi!

We have run several instances of MooseFS for over 5 years already and have never seen an error like yours. There was a situation where one file was lacking and the other existed but without a relation to anything. We added the -i (ignore) flag to mfsmetarestore and got this result:

  loading objects (files,directories,etc.) ... ok
  loading names ... loading edge: 7527,DSC01862.JPG->7554 error: child not found
  ok
  loading deletion timestamps ... ok
  checking filesystem consistency ... fschk: found lost inode: 7538
  ok
  loading chunks data ... ok
  connecting files and chunks ... ok
  store metadata into file: ../../../Downloads/mfs/metadata.mfs

The numbers of the files differ exactly by one bit:

  >>> "%02X" % 7554
  '1D82'
  >>> "%02X" % 7538
  '1D72'

We think that this problem could be caused by the RAM in your master. We recommend using RAM with parity control. You can also run a test from http://www.memtest.org/ on your server and check your existing RAM. Of course, the bit could have been changed also at the motherboard level or in the CPU - which is much less probable.

Also you can see in the log that file 7538 is located between 7553 and 7555:

  -|i: 7549|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156861,m:1088897360,c:1302340549|t: 86400|l: 978749|c:(0000000000001B1B)|r:()
  -|i: 7550|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156866,m:1088897400,c:1302340549|t: 86400|l: 804362|c:(0000000000001B1C)|r:()
  -|i: 7551|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156869,m:1088897438,c:1302340549|t: 86400|l: 850289|c:(0000000000001B1D)|r:()
  -|i: 7552|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156873,m:1088897474,c:1302340549|t: 86400|l: 710445|c:(0000000000001B1E)|r:()
  -|i: 7553|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156876,m:1098246428,c:1302340549|t: 86400|l: 456633|c:(0000000000001B1F)|r:()
  -|i: 7538|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302154827,m:1088893918,c:1302340549|t: 86400|l: 848797|c:(0000000000001B10)|r:()
  -|i: 7555|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156878,m:1088897534,c:1302340549|t: 86400|l: 137858|c:(0000000000001B21)|r:()
  -|i: 7556|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156878,m:1088898128,c:1302340549|t: 86400|l: 805701|c:(0000000000001B22)|r:()
  -|i: 7557|#:2|e:0|m:0777|u: 65534|g: 65534|a:1302156880,m:1088898148,c:1302340549|t: 86400|l: 817717|c:(0000000000001B23)|r:()
  -|i: 7558|#:2|e:0|m:0777|u: 65534|g: 65534|a:1157861440,m:1088898162,c:1302340549|t: 86400|l: 852298|c:(0000000000001B24)|r:()
  -|i: 7559|#:2|e:0|m:0777|u: 65534|g: 65534|a:1157861440,m:1088898186,c:1302340549|t: 86400|l: 797550|c:(0000000000001B25)|r:()
  -|i: 7560|#:2|e:0|m:0777|u: 65534|g: 65534|a:1157861440,m:1088898530,c:1302340549|t: 86400|l: 764878|c:(0000000000001B26)|r:()

Kind regards
-Michal

-----Original Message-----
From: Tuukka Luolamo [mailto:tlu...@gm...]
Sent: Monday, May 30, 2011 7:40 PM
To: Michal Borychowski
Subject: Re: [Moosefs-users] Problems after power failure

Hello Michael,

Attached are the files you requested. Let me know if you need anything else.

Now, getting the meta files fixed would be great, but a way to rebuild them from the chunk servers' contents would also be a viable option for this system, as I only have two servers in the cluster: one acting as the master and a chunkserver, and the other acting as the metalogger and a second chunkserver. I have the replication set to 2, so both have all the contents of the file system. Also, when it went down I am pretty sure there was nothing being written to the servers. This is my home / test system, so getting the data back is important, but the time it takes to recover it is not.

Thanks,
Tuukka

2011/5/30 Michal Borychowski <mic...@ge...>:
> Hi!
>
> If you could send us your "metadata.mfs*" and "changelog*" files
> (tar.gzipped) - we'll see what can be done about it.
>
> Kind regards
> Michał Borychowski
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01
>
> -----Original Message-----
> From: Tuukka Luolamo [mailto:tlu...@gm...]
> Sent: Sunday, May 29, 2011 3:36 AM
> To: moo...@li...
> Subject: [Moosefs-users] Problems after power failure
>
> I had a power failure and both my master and meta logger went down
> simultaneously.
>
> When I turned them back on the master process failed to start, so I
> ran metarestore -a but got the following error:
>
> loading objects (files,directories,etc.) ... ok
> loading names ... loading edge: 7527,DSC01862.JPG->7554 error: child not found
> error
> can't read metadata from file: metadata.mfs.back
>
> So I went to the metalogger and got the same error.
>
> Now I am not sure what to try next.
>
> Any help would be appreciated.
>
> Tuukka
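For context, a hedged sketch of the mfsmetarestore invocations implied by this exchange. The data directory path is an assumption (it depends on how MooseFS was installed), and the effect of -i (ignoring the metadata inconsistency) is as described by the developers above, not verified here:

  # run where metadata.mfs.back and the changelog files live (on the master, or on a metalogger copy)
  cd /var/lib/mfs                  # assumed data directory; adjust to your installation
  mfsmetarestore -a                # automatic restore - failed here with "error: child not found"
  mfsmetarestore -a -i             # retry, ignoring the inconsistent record, as the developers did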
From: Michal B. <mic...@ge...> - 2011-06-01 08:55:21
> -----Original Message-----
> From: Josef [mailto:pe...@p-...]
> Sent: Tuesday, May 31, 2011 4:41 PM
> To: moo...@li...
> Subject: [Moosefs-users] automount in debian
>
> Hello,
> I'm having problems with automount under debian (squeeze). I have
> this line in my fstab:
> /opt/mfs/bin/mfsmount /mnt/mfs fuse
> mfsmaster=172.16.100.2,mfsport=9421,_netdev 0 0
>
> during the startup debian finds out that it is a net device, so it calls
> if-up.d/mountmfs, but it fails:
>
> Configuring network interfaces...if-up.d/mountnfs[eth0]: waiting for
> interface eth1 before doing NFS mounts ... (warning).

This message comes from the Debian init scripts and means that eth0 has been initialized but eth1 isn't yet, so mounting of network filesystems is delayed until initialization of the remaining network interfaces.

> can't connect to mfsmaster ("172.16.100.2":"9421")

This is an mfsmount message; it probably occurs after initialization of eth1.

There are a few possible causes, e.g.:
- the master is not running yet
- the given host is not accessible yet (maybe there is some delay between initializing the network interface and the network being accessible?)

Kind regards
Michal Borychowski
From: Josef <pe...@p-...> - 2011-05-31 14:59:42
Hello,

I'm having problems with automount under debian (squeeze). I have this line in my fstab:

  /opt/mfs/bin/mfsmount /mnt/mfs fuse mfsmaster=172.16.100.2,mfsport=9421,_netdev 0 0

During startup debian finds out that it is a net device, so it calls if-up.d/mountmfs, but it fails:

  Configuring network interfaces...if-up.d/mountnfs[eth0]: waiting for interface eth1 before doing NFS mounts ... (warning).
  can't connect to mfsmaster ("172.16.100.2":"9421")

If I call mount -a -t fuse after boot, mfs is mounted correctly.
From: Anh K. H. <ky...@vi...> - 2011-05-31 14:14:09
On Mon, 30 May 2011 19:01:41 +0000 Jean-Baptiste <jea...@gm...> wrote:

> correct me if i'm wrong, but it seems that the current repository is
> located on sourceforge: http://sourceforge.net/projects/moosefs/develop.
> I saw an unofficial github mirror (https://github.com/icy/moosefs).

Actually this is my fork (for my personal patches). The mirror has moved to a new address: https://github.com/xmirror/moosefs

> Is there any chance for it to become the official one ? Github
> offers a greater flexibility for external contribution than
> sourceforge, i think everybody agree on that.

I agree with you that using Git will help MooseFS get more contributors. I am going to write some scripts to keep the repositories at "xmirror" up-to-date. Surely it would be nice if there were an official mirror on Github.

> By the way, is there a reason why there is no activity on the
> repository since the last release ? Maybe are you working on a
> private repository ? I think it could be nice to see the current
> work on your current moosefs branch and motivate people to
> contribute !

IMHO, Git allows developers to work *easily* on private branches, so we may not see any activities on the repository. BTW, this is my opinion; this isn't an official reply from the MooseFS team.

Regards,

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
From: Ólafur Ó. <osv...@ne...> - 2011-05-31 13:47:04
Hi,

Answers inline below.

On 30.5.2011, at 21:04, Howie Chen wrote:

> I saw many ppl have already implemented MooseFS as the backend storage
> for Xen. How do you mount MFS for it? Just create a flat file and create
> FS and LVM on it?

We create a flat file, partition that and use LVM so that we can extend that with additional disk files if required.

> Also I am concern about the security since every client connected to the
> central storage, what if one client got compromised, does it mean it can
> be easily to lose all data?

That is a possibility, but I would imagine it's the same with all central storage for VMs.

> How about the storage for OpenVZ? OpenVZ container stores files within
> one folder on the server, it's so easy to hit 25 million files I believe.
> To solve this issue, according to some cases, is to add more memory to
> the master. Does it mean this is a soft limitation of MFS since not
> everybody can afford a server with 192GB memory?
>
> How's your read and write speed on the client? Mine can only reach around
> 40MB/s for writing, 68MB/s for reading. 2 chunks, 2 goals, 4 harddrives
> each server, 2 gigabit Nics bonded on a gigabit switch. Is it a normal
> rate? Last time I saw my friend perform dd command on one of his Exs VM,
> and he got 1GB/s read and write speed. His settings is pretty much the
> same to mine except he got 3 chunks and 6 harddrives for each server.

Using dd I get over 1GB/s read speeds and a few hundred MB/s write speeds with files under 3GB, reading/writing 1024 bytes at a time; with files over 2.5GB the speed drops drastically until I raise the bs parameter, and then it goes back up.

I just did a quick test using hdparm and seeker.py from https://github.com/fidlej/seeker on a 50GB disk within a VM; try the same test and see how it compares.

  root@cache0# seeker/seeker.py /dev/sdb
  Benchmarking /dev/sdb [50.00 GB]
  10/0.01 = 985 seeks/second
  1.01 ms random access time
  100/0.09 = 1123 seeks/second
  0.89 ms random access time
  1000/0.93 = 1081 seeks/second
  0.93 ms random access time
  10000/9.25 = 1080 seeks/second
  0.93 ms random access time
  100000/91.41 = 1093 seeks/second
  0.91 ms random access time

  root@cache0# hdparm -t /dev/sdb
  /dev/sdb:
  Timing buffered disk reads: 464 MB in 3.01 seconds = 154.13 MB/sec

Our MFS setup is 10 chunkservers with 6x1TB SATA drives each, running Ubuntu from RAM, with 2x100Mb/s network connections to the storage networks, each with MTU at 9000. Each dom0 has a 1000Gb/s connection to the storage network.

I did the same test on a SAN setup we have with Xen and got better results there:

  Raid10 SATA setup:
  /dev/xvdc:
  Timing buffered disk reads: 868 MB in 3.00 seconds = 289.33 MB/sec

  Benchmarking /dev/xvde [50.00 GB]
  10/0.01 = 1504 seeks/second
  0.66 ms random access time
  100/0.02 = 6383 seeks/second
  0.16 ms random access time
  1000/0.19 = 5249 seeks/second
  0.19 ms random access time
  10000/1.71 = 5857 seeks/second
  0.17 ms random access time
  100000/15.32 = 6528 seeks/second
  0.15 ms random access time

  /dev/xvde:
  Timing buffered disk reads: 878 MB in 3.00 seconds = 292.55 MB/sec

> I would appreciate any response.

Hope that helps.

/Oli

--
Ólafur Osvaldsson
System Administrator
Nethonnun ehf.
e-mail: osv...@ne...
phone: +354 517 3400
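A hedged sketch of the flat-file-plus-LVM layout described above. The mount point, image file name and sizes are illustrative assumptions, not Nethonnun's actual configuration; the idea is simply a large file on the MooseFS mount exposed as a block device, with LVM on top so the pool can later be extended with further image files.

  # create a (sparse) image file on the MooseFS mount and expose it as a block device
  dd if=/dev/zero of=/mnt/mfs/xen/pool0.img bs=1M count=0 seek=102400   # ~100 GiB, sparse
  losetup /dev/loop0 /mnt/mfs/xen/pool0.img

  # put LVM on top; one logical volume per guest disk
  pvcreate /dev/loop0
  vgcreate vg_mfs /dev/loop0
  lvcreate -L 20G -n vm1-disk vg_mfs

  # to grow the pool later: create another image file, attach it as /dev/loop1, then
  #   pvcreate /dev/loop1 && vgextend vg_mfs /dev/loop1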
From: Alexander A. <akh...@ri...> - 2011-05-31 12:20:11
Hi All!

I agree with Thomas! I would be grateful.

Alexander Akhobadze

======================================================
The tool that I would like to see would tell me from the command line what files are missing chunks. Normally I have to do some serious find magic or wait for it to show up in the web interface.
From: Alexander A. <akh...@ri...> - 2011-05-31 12:20:09
Hi!

IPv6 is not really relevant for now at all! :--)

Alexander Akhobadze

======================================================
Hi

That's right - it won't happen soon. The IPv6 standard was created in 1998 (!) and after 13 years it is still not very popular. When it happens, we'll for sure implement support for it in MooseFS. For now, if there are no other advantages, we won't focus on that.

Kind regards
Michal

-----Original Message-----
From: Ricardo J. Barberis [mailto:ric...@da...]
Sent: Monday, May 30, 2011 8:25 PM
To: moo...@li...
Subject: Re: [Moosefs-users] IPv6 ?

On Monday 30 May 2011, Florent Bautista wrote:
> Hi,
>
> Thank you for your answer.
>
> The question is not really what would be better in IPv6, but the fact is
> that IPv4 *will* stop one day, and maybe soon.

No, it won't! (not soon, at least).

There will be a period where IPv4 and IPv6 will coexist, and IMHO that period will last 5 to 10 years (no hard evidence, just a hunch).

Besides that, your LAN is your LAN, you can keep using IPv4 for as long as you want, and as long as your networking gear supports it

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Papp T. <to...@ma...> - 2011-05-31 10:03:08
On 05/31/2011 10:51 AM, Michal Borychowski wrote:
> Hi!

hi!

> Hmmm... How many chunks do you have on this test machine? Maybe there are so
> many many chunks that their processing causes timeouts - but this is rather
> unprobable.

  /data/backup:
  inodes:       30Mi
  directories:  4.0Mi
  files:        26Mi
  chunks:       27Mi
  length:       67TiB
  size:         68TiB
  realsize:     68TiB

I think this is not a really huge number, just a smaller backup server with dirvish.

> Maybe you just have too little RAM and your swap is overused which causes
> timeouts? More or less you should have about 12GB of RAM.

I'm sure the RAM size should be bigger; I expected it to work, just at a lower rate.

> Regarding sizes of hardlinks - "fixing" it would be too demanding for the
> CPU of the master. Similarly mfsmakesnapshot causes multiple counting of the
> same data.

I think I'll set a rate limit for rsync; maybe that helps...

Thanks,
tamas
From: Michal B. <mic...@ge...> - 2011-05-31 09:10:37
Hi Laurent!

Yes, we know about this (if I recall correctly, we already talked about this subject on our group before). Generally speaking there is no mechanism leveling out the occupation of the disks within a single chunkserver (to be implemented in the future); there is only a mechanism leveling out the occupation between the chunkservers. When you remove the 400 GiB disks, the system will eventually rebalance everything after some time.

Best regards
-Michal

-----Original Message-----
From: Laurent Wandrebeck [mailto:lw...@hy...]
Sent: Wednesday, May 25, 2011 1:14 PM
To: moo...@li...
Subject: [Moosefs-users] corner case in disk for removal algorithm ?

Hi there,

I've got a chunkserver, say A, with 24 disks (JBOD). 12 have already been changed from 400 GiB to 2 TiB. The load is finished. Now, I have marked for removal the 12 400 GiB disks that were remaining. The other chunkservers are taking the chunks, but the 12 2 TiB disks on A are not at all taking the load. So A, considering only the 12 2 TiB disks, is 90% full, while the other chunkservers are now at 96%. Could it be a forgotten corner case in chunk space load balancing?

Thanks,
--
Laurent Wandrebeck
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com
GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C
From: Michal B. <mic...@ge...> - 2011-05-31 08:51:42
Hi!

Hmmm... How many chunks do you have on this test machine? Maybe there are so many chunks that their processing causes timeouts - but this is rather improbable.

Maybe you just have too little RAM and your swap is overused, which causes timeouts? More or less you should have about 12GB of RAM.

Regarding sizes of hardlinks - "fixing" it would be too demanding for the CPU of the master. Similarly, mfsmakesnapshot causes multiple counting of the same data.

Best regards
-Michał

-----Original Message-----
From: Papp Tamas [mailto:to...@ma...]
Sent: Monday, May 30, 2011 11:24 AM
To: Michal Borychowski; moo...@li...
Subject: Re: [Moosefs-users] timeout

On 05/30/2011 11:13 AM, Michal Borychowski wrote:
> Hi!
>
> Just start your chunkservers one by one in such a situation. But this
> happens very rarely.

In this case it is not the same situation. I have only one chunkserver, which is the same machine as the master server. It starts this behaviour after a day or two of uptime.

> I don't see much differences in the sizes - what do you mean exactly?
> /data/backup:
> inodes: 33Mi
> directories: 4.2Mi
> files: 29Mi
> chunks: 29Mi
> length: 72TiB
> size: 73TiB
> realsize: 73TiB
>
> /dev/sda6        10T  7.3T  2.8T  73%  /mnt/mfschunk1
> mfsmaster:9421   10T  7.3T  2.8T  73%  /data/backup

mfsdirinfo shows the volume size as 73T while df shows the real one, which is 7.3T - or do I misunderstand something?

Thanks,
tamas
From: Michal B. <mic...@ge...> - 2011-05-31 08:44:37
Hi

That's right - it won't happen soon. The IPv6 standard was created in 1998 (!) and after 13 years it is still not very popular. When it happens, we'll for sure implement support for it in MooseFS. For now, if there are no other advantages, we won't focus on that.

Kind regards
Michal

-----Original Message-----
From: Ricardo J. Barberis [mailto:ric...@da...]
Sent: Monday, May 30, 2011 8:25 PM
To: moo...@li...
Subject: Re: [Moosefs-users] IPv6 ?

On Monday 30 May 2011, Florent Bautista wrote:
> Hi,
>
> Thank you for your answer.
>
> The question is not really what would be better in IPv6, but the fact is
> that IPv4 *will* stop one day, and maybe soon.

No, it won't! (not soon, at least).

There will be a period where IPv4 and IPv6 will coexist, and IMHO that period will last 5 to 10 years (no hard evidence, just a hunch).

Besides that, your LAN is your LAN, you can keep using IPv4 for as long as you want, and as long as your networking gear supports it :)

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Howie C. <ho...@vp...> - 2011-05-30 22:01:12
Hi there,

I saw that many people have already implemented MooseFS as the backend storage for Xen. How do you mount MFS for it? Just create a flat file and create a FS and LVM on it?

Also I am concerned about security: since every client connects to the central storage, what if one client gets compromised - does that mean it is easy to lose all data?

How about storage for OpenVZ? An OpenVZ container stores files within one folder on the server, so it's easy to hit 25 million files, I believe. The solution to this issue, according to some cases, is to add more memory to the master. Does that mean this is a soft limitation of MFS, since not everybody can afford a server with 192GB of memory?

How's your read and write speed on the client? Mine can only reach around 40MB/s for writing and 68MB/s for reading: 2 chunkservers, goal 2, 4 hard drives in each server, 2 gigabit NICs bonded on a gigabit switch. Is that a normal rate? Last time I saw my friend perform a dd command on one of his Exs VMs, and he got 1GB/s read and write speed. His setup is pretty much the same as mine, except he has 3 chunkservers and 6 hard drives in each server.

I would appreciate any response. Thank you!
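A hedged sketch of the kind of dd throughput test referred to above. The mount point and file name are assumptions; conv=fdatasync and dropping the page cache are added so the figures reflect the cluster rather than local caching, and bs is worth varying since, as noted elsewhere in this thread, small block sizes can depress the numbers.

  # sequential write test (4 GiB); fdatasync makes dd wait until the data has been flushed
  dd if=/dev/zero of=/mnt/mfs/ddtest.bin bs=1M count=4096 conv=fdatasync

  # drop the local page cache, then read the file back
  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/mfs/ddtest.bin of=/dev/null bs=1M

  rm /mnt/mfs/ddtest.bin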
From: Jean-Baptiste <jea...@gm...> - 2011-05-30 19:01:47
Hello everybody,

Correct me if I'm wrong, but it seems that the current repository is located on sourceforge: http://sourceforge.net/projects/moosefs/develop. I saw an unofficial github mirror (https://github.com/icy/moosefs). Is there any chance for it to become the official one? Github offers greater flexibility for external contribution than sourceforge; I think everybody agrees on that.

By the way, is there a reason why there is no activity on the repository since the last release? Maybe you are working on a private repository? I think it would be nice to see the current work on your moosefs branch and motivate people to contribute!

Anyway, thank you for sharing moosefs with the world.

Regards,
Jean-Baptiste
From: Ricardo J. B. <ric...@da...> - 2011-05-30 18:25:07
On Monday 30 May 2011, Florent Bautista wrote:
> Hi,
>
> Thank you for your answer.
>
> The question is not really what would be better in IPv6, but the fact is
> that IPv4 *will* stop one day, and maybe soon.

No, it won't! (not soon, at least).

There will be a period where IPv4 and IPv6 will coexist, and IMHO that period will last 5 to 10 years (no hard evidence, just a hunch).

Besides that, your LAN is your LAN, you can keep using IPv4 for as long as you want, and as long as your networking gear supports it :)

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Thomas S H. <tha...@gm...> - 2011-05-30 14:58:20
The tool that I would like to see would tell me from the command line what files are missing chunks. Normally I have to do some serious find magic or wait for it to show up in the web interface.
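A hedged sketch of the "find magic" workaround in the meantime. It assumes the mfsfileinfo client tool is installed and that it flags a missing chunk with a "no valid copies" line - that exact wording is an assumption and may differ between MooseFS versions, so check the output on one known-damaged file first.

  # walk the MooseFS mount and print files that have at least one chunk with no valid copy
  find /mnt/mfs -type f -print0 |
  while IFS= read -r -d '' f; do
      if mfsfileinfo "$f" | grep -q "no valid copies"; then
          printf 'missing chunks: %s\n' "$f"
      fi
  done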
From: Davies L. <dav...@gm...> - 2011-05-30 14:46:27
A client API library is needed; then we could create a web server module, or a file system for Hadoop. The client based on FUSE does not scale well in some situations, or does not perform fast enough.

2011/5/30 Michal Borychowski <mic...@ge...>:
> Hi!
>
> You often ask "how to do this" / "why can't I do this in command line" and
> every time we answer that sometime we'll prepare a set of tools called
> "mastertools" where you could do lots of useful tasks. And this moment is
> slowly coming up :)
>
> We have thought of some tools but for sure you can give us more suggestions:
>
> - mfschunkinfo chunkid [, chunkid ...]
> - mfsinodeinfo inode [, inode ...]
> - mfsinfo or mfsstats (?) - returning how many files, folders, missing
>   chunks, etc. there are
> - mfscsinfo - returning list of connected chunkservers
> - mfsmlinfo - returning list of connected metaloggers
> - mfsclinfo - returning list of connected clients
> - mfshdinfo - returning list of connected hard drives
>
> Do you think of any other useful tools which could be included in the
> "mastertools" set? Please share your thoughts with us.
>
> Thanks!
>
> Best regards
> Michał Borychowski
>
> MooseFS Support Manager
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> Gemius S.A.
> ul. Wołoska 7, 02-672 Warszawa
> Budynek MARS, klatka D
> Tel.: +4822 874-41-00
> Fax : +4822 874-41-01

--
- Davies
From: Florent B. <fl...@co...> - 2011-05-30 14:02:42
Hi everyone,

In a test cluster composed of 1 mfsmaster and 2 chunkservers, I have 413773 files in 412858 chunks. First, it is strange to have fewer chunks than files, but maybe it's because there are some empty files (I suppose).

All files are goal=3 (but I have 2 chunkservers, so all chunks are under-goal, no problem!), and there are 412858 chunks on both servers. What is very strange is that those 412858 chunks occupy 16 GiB on the first server and 22 GiB on the second. Both are ext4 file systems, Ubuntu Desktop and Server 11.04. I verified with "du -sh" and the sizes are the same as shown by the mfscgi server.

All servers are 1.6.20 and "Filesystem check info" shows nothing special. All files were copied once and not modified afterwards (not a versioning problem).

A few days ago, I added a third chunkserver. Some chunks were copied onto it, but not all files were at goal=3 yet (I deleted it before).

How can you explain that situation? Is that "normal"?

If you need other information, let me know!

Thank you a lot

--
Florent Bautista

------------------------------------------------------------------------
This e-mail and any attachments hereto are strictly personal, confidential and intended solely for the addressee. If you are not the intended recipient, be advised that you have received this email in error and that any use, dissemination, forwarding, printing, or copying of this message is strictly prohibited.
------------------------------------------------------------------------

30440 Saint Laurent le Minier, France
*Compagnie pour des Prestations Internet*
Phone: +33 (0)467 73 89 48
Fax: +33 (0)9 59 48 06 27
E-mail: Fl...@Co...