From: Anh K. H. <anh...@gm...> - 2012-03-24 06:58:23
|
On Fri, 23 Mar 2012 19:36:18 -0400 mARK bLOORE <mb...@gm...> wrote:
> is there a description of what "rack awareness" means in mfs? i did a
> simple test of the assumption (and hope) that files would be
> distributed to have at least one copy in each server group, but that
> did not happen. in fact the files i created started out with copies
> randomly distributed among chunk servers, but ended up all in one
> server group.

In the latest version, 1.6.24, there is support for this. You can check the "topology" configuration (distributed in the default installation of MFS 1.6.24). I've never tried it, but I think that's what you're looking for. There is no similar support in any previous version.

Regards,

--
"It is better for civilization to be going down the drain than to be coming up it." -- Henry Allen
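For readers looking for the file in question: rack awareness in 1.6.24 is driven by mfstopology.cfg on the master (path depends on your install prefix), which maps client/chunkserver IP ranges to numeric switch/rack IDs so copies can be preferred across different switches. A minimal sketch is below; the IP ranges and switch numbers are made-up example values, and the exact syntax should be checked against the mfstopology.cfg shipped with 1.6.24:

    # mfstopology.cfg -- example values only
    # <network spec>      <switch/rack number>
    192.168.1.0/24        1
    192.168.2.0/24        2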
From: mARK b. <mb...@gm...> - 2012-03-23 23:36:45
|
is there a description of what "rack awareness" means in mfs? i did a simple test of the assumption (and hope) that files would be distributed to have at least one copy in each server group, but that did not happen. in fact the files i created started out with copies randomly distributed among chunk servers, but ended up all in one server group. -- mARK bLOORE <mb...@gm...> |
From: Allen, B. S <bs...@la...> - 2012-03-21 20:37:53
|
On Mar 21, 2012, at 12:09 PM, Atom Powers wrote:
> On 03/21/2012 08:23 AM, Allen, Benjamin S wrote:
>> To the MooseFS folks, I work in an industry where I cannot agree to:
>>
>> "To publish the name and trademark of the company, on behalf of which I
>> act, on the MooseFS website or Core Technology website and to use the
>> aforementioned name and trademark in order to present them to Core
>> Technology's clients for marketing purposes only."
>
> Same here. All contracts that come across my desk must have this kind of
> clause removed before I can sign them. As it stands, this clause will
> prevent me from upgrading.

The code is GPLv3. I don't believe the info request form is blocking your access to the code. For example, just visit the download page with Javascript disabled: the download link works without issue. Taking the link and downloading it with wget also works just fine.

>> Also I would suggest you remove having to forcefully enter contact info
>> to download. Typically this doesn't jive well with OSS folks. Make it
>> voluntary.
>
> I am less decisive on this point. If MooseFS was moving to a "freemium"
> model I would understand but as an OSS project it doesn't make sense.

The question is whether Core Technology will scare away more of the community and newcomers, or gain more customers, by asking for this customer info. Obviously their call, not ours, but it certainly seems a bit of an abrupt change.

Ben
From: Atom P. <ap...@di...> - 2012-03-21 18:10:03
|
On 03/21/2012 08:23 AM, Allen, Benjamin S wrote:
> To the MooseFS folks, I work in an industry where I cannot agree to:
>
> "To publish the name and trademark of the company, on behalf of which I
> act, on the MooseFS website or Core Technology website and to use the
> aforementioned name and trademark in order to present them to Core
> Technology's clients for marketing purposes only."

Same here. All contracts that come across my desk must have this kind of clause removed before I can sign them. As it stands, this clause will prevent me from upgrading.

> Also I would suggest you remove having to forcefully enter contact info
> to download. Typically this doesn't jive well with OSS folks. Make it
> voluntary.

I am less decisive on this point. If MooseFS was moving to a "freemium" model I would understand but as an OSS project it doesn't make sense.

--
-- Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443
From: Laurent W. <lw...@hy...> - 2012-03-21 17:59:17
|
Same here. And git repo isn't up to date. -- Laurent Wandrebeck HYGEOS, Earth Observation Department / Observation de la Terre Euratechnologies 165 Avenue de Bretagne 59000 Lille, France tel: +33 3 20 08 24 98 http://www.hygeos.com GPG fingerprint/Empreinte GPG: F5CA 37A4 6D03 A90C 7A1D 2A62 54E6 EF2C D17C F64C |
From: Allen, B. S <bs...@la...> - 2012-03-21 15:23:33
|
One can easily find the download link in the HTML source of the download page. Currently it appears to be http://moosefs.org/tl_files/mfscode/mfs-1.6.24.tar.gz

To the MooseFS folks, I work in an industry where I cannot agree to:

"To publish the name and trademark of the company, on behalf of which I act, on the MooseFS website or Core Technology website and to use the aforementioned name and trademark in order to present them to Core Technology's clients for marketing purposes only."

Please remove this clause. It tarnishes the MooseFS project. If I were new to the project, I would certainly be turned away by this alone.

Also, I would suggest you remove having to forcefully enter contact info to download. Typically this doesn't jive well with OSS folks. Make it voluntary.

Ben

On Mar 21, 2012, at 6:56 AM, Steve Thompson wrote:
> On Wed, 21 Mar 2012, Steve Wilson wrote:
>> Related to this, I filled out the download information form three times
>> yesterday and once today but haven't received any email yet with the
>> download URL. Someone on the MooseFS chat site mentioned that the same
>> thing happened to them when they tried it yesterday.
>
> Me too.
>
> Steve
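Putting the direct link above together with the wget approach mentioned earlier in this thread, a non-interactive fetch looks roughly like this (the URL may change with future releases):

    # Fetch the 1.6.24 tarball directly, bypassing the web form (URL as quoted above)
    wget http://moosefs.org/tl_files/mfscode/mfs-1.6.24.tar.gz
    tar xzf mfs-1.6.24.tar.gz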
From: Steve T. <sm...@cb...> - 2012-03-21 12:56:47
|
On Wed, 21 Mar 2012, Steve Wilson wrote: > Related to this, I filled out the download information form three times > yesterday and once today but haven't received any email yet with the > download URL. Someone on the MooseFS chat site mentioned that the same > thing happened to them when they tried it yesterday. Me too. Steve |
From: Steve W. <st...@pu...> - 2012-03-21 12:50:45
|
On 03/21/2012 05:48 AM, jose maria wrote:
> * the download link of the new version of moosefs.org activates a window
> for insertion of personal information, the mail received later includes
> the same link and activates the same process, this happens with firefox
> not with explorer.

Related to this, I filled out the download information form three times yesterday and once today but haven't received any email yet with the download URL. Someone on the MooseFS chat site mentioned that the same thing happened to them when they tried it yesterday.

Steve
From: ???????? ????????? ????????? <akh...@ri...> - 2012-03-21 12:31:40
|
Hi dear All!

What about a FreeBSD port of version 1.6.24?

>> being processed (ITP):
>>
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=619841
>
> I have added a note with the address "Jakub Bogusz
> <co...@mo...>" to the "Request for Packaging" - together with a
> comment that it's exceptionally easy to build debian packages, since
> everything is already in place.
>
> In the meantime - I have packages for 1.6.20 and 1.6.24 in our local
> repository at
> <http://itp.tugraz.at/Comp/debian/dists/squeeze/system/binary-amd64/>
> <http://itp.tugraz.at/Comp/debian/dists/squeeze/system/binary-i386/>
>
> By
> Andreas
> --
> Andreas Hirczy <ah...@it...>  http://itp.tugraz.at/~ahi/
> Graz University of Technology
From: jose m. <let...@us...> - 2012-03-21 09:48:34
|
* the download link of the new version of moosefs.org activates a window for insertion of personal information, the mail received later includes the same link and activates the same process, this happens with firefox not with explorer.
From: Andreas H. <ah...@it...> - 2012-03-20 13:20:01
|
Frédéric Massot <fre...@ju...> writes:
> Hi,
>
> Some of you will like to have a Debian package of MooseFS, a request is
> being processed (ITP):
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=619841

I have added a note with the address "Jakub Bogusz <co...@mo...>" to the "Request for Packaging" - together with a comment that it's exceptionally easy to build debian packages, since everything is already in place.

In the meantime - I have packages for 1.6.20 and 1.6.24 in our local repository at
<http://itp.tugraz.at/Comp/debian/dists/squeeze/system/binary-amd64/>
<http://itp.tugraz.at/Comp/debian/dists/squeeze/system/binary-i386/>

By
Andreas
--
Andreas Hirczy <ah...@it...>  http://itp.tugraz.at/~ahi/
Graz University of Technology                        phone: +43/316/873-8190
Institute of Theoretical and Computational Physics   fax: +43/316/873-10 8190
Petersgasse 16, A-8010 Graz                          mobile: +43/664/859 23 57
From: Frédéric M. <fre...@ju...> - 2012-03-20 10:56:50
|
Hi,

Some of you will like to have a Debian package of MooseFS, a request is being processed (ITP):

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=619841

Regards.
--
==============================================
| FRÉDÉRIC MASSOT                            |
| http://www.juliana-multimedia.com          |
| mailto:fre...@ju...                        |
===========================Debian=GNU/Linux===
From: Davies L. <dav...@gm...> - 2012-03-16 10:37:22
|
On Fri, Mar 16, 2012 at 5:08 PM, Quenten Grasso <QG...@on...> wrote:
> Hi Ken,
>
> I found the writes are a little bit strange as well. Jian on this mailing
> list suggested I try using -o big_writes; this improved these kinds of tests
> significantly.

big_writes is an option of FUSE, not mfsmount; it may use a larger buffer for writes. The default write buffer of mfsmount is 128M, which is generally enough, but it may be useful in benchmarks.

> Eg: mount -t big_writes /mnt/moosefs -H mfsmaster
>
> I'm still working on the reads/general performance.
>
> Regards,
> Quenten Grasso
>
> From: Ken [mailto:ken...@gm...]
> Sent: Friday, 16 March 2012 4:38 PM
> To: moo...@li...
> Subject: [Moosefs-users] [Moosefs-user]ChunkServer performance bottleneck
>
> Hi, all
>
> Recently I did some pressure test in MFS, it is very simple.
> There are 4 CS(chunkserver) in this cluster, every CS contain 2 NICs of
> 1000Mbps PCI and 10 hard disks of 1T.
>
> We send write request from 10 or more mounted node. I noticed that the
> receive bandwidth of CS never exceed 70MiB/s.
>
> On the other side, we create 2-3 netcat processes listening under same
> node, and call "cat /dev/zero | nc ..." from other machines. The bandwidth
> reached 180MiB/s very easily.
>
> I think there are some bottleneck in code.
>
> At first, in CS, all action of socket call in only(?) main thread. Is this
> a bottleneck?
>
> Second, I used some tools(mutrace) on CS, the result is not very clearly.
> But I can found
> the most busy lock is hashlock(hddspacemgr.c:234) and
> the most longest waited lock is in jpool of masterconn.c.
>
> So, I think change hashlock to a read/write mutex maybe is the most easy
> way to improve performance.
> And then change folderlock as same.
>
> Is that make sense? Any feedback is appreciated.
>
> Thanks.
>
> Best Regards
> -Ken

--
- Davies
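For anyone trying to reproduce this: since big_writes is a FUSE mount option, with mfsmount it is normally passed via -o rather than mount -t as in the quoted example. A hedged sketch, where the mount point and master hostname are placeholders:

    # Pass the FUSE big_writes option through mfsmount
    # (needs a kernel with FUSE big_writes support, >= 2.6.26 as noted elsewhere in this thread;
    #  mount point and master hostname are example values)
    mfsmount /mnt/moosefs -H mfsmaster -o big_writes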
From: Quenten G. <QG...@on...> - 2012-03-16 09:27:02
|
Hi Ken,

I found the writes are a little bit strange as well. Jian on this mailing list suggested I try using -o big_writes; this improved these kinds of tests significantly.

Eg: mount -t big_writes /mnt/moosefs -H mfsmaster

I'm still working on the reads/general performance.

Regards,
Quenten Grasso

From: Ken [mailto:ken...@gm...]
Sent: Friday, 16 March 2012 4:38 PM
To: moo...@li...
Subject: [Moosefs-users] [Moosefs-user]ChunkServer performance bottleneck

Hi, all

Recently I did some pressure test in MFS, it is very simple. There are 4 CS(chunkserver) in this cluster, every CS contain 2 NICs of 1000Mbps PCI and 10 hard disks of 1T.

We send write request from 10 or more mounted node. I noticed that the receive bandwidth of CS never exceed 70MiB/s.

On the other side, we create 2-3 netcat processes listening under same node, and call "cat /dev/zero | nc ..." from other machines. The bandwidth reached 180MiB/s very easily.

I think there are some bottleneck in code.

At first, in CS, all action of socket call in only(?) main thread. Is this a bottleneck?

Second, I used some tools(mutrace) on CS, the result is not very clearly. But I can found the most busy lock is hashlock(hddspacemgr.c:234) and the most longest waited lock is in jpool of masterconn.c.

So, I think change hashlock to a read/write mutex maybe is the most easy way to improve performance. And then change folderlock as same.

Is that make sense? Any feedback is appreciated.

Thanks.

Best Regards
-Ken
From: Ken <ken...@gm...> - 2012-03-16 06:38:54
|
Hi, all

Recently I did some pressure tests in MFS; it is very simple. There are 4 CS (chunkservers) in this cluster; every CS contains 2 NICs of 1000Mbps PCI and 10 hard disks of 1T.

We send write requests from 10 or more mounted nodes. I noticed that the receive bandwidth of a CS never exceeds 70MiB/s.

On the other side, we create 2-3 netcat processes listening on the same node, and call "cat /dev/zero | nc ..." from other machines. The bandwidth reached 180MiB/s very easily.

I think there are some bottlenecks in the code.

At first, in CS, all socket calls happen in only(?) the main thread. Is this a bottleneck?

Second, I used some tools (mutrace) on CS; the result is not very clear, but I found that the busiest lock is hashlock (hddspacemgr.c:234) and the longest-waited lock is in jpool of masterconn.c.

So, I think changing hashlock to a read/write mutex is maybe the easiest way to improve performance, and then changing folderlock the same way.

Does that make sense? Any feedback is appreciated.

Thanks.

Best Regards
-Ken
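The raw-network baseline described above (netcat listeners on the chunkserver node, zeros streamed in from other machines) can be reproduced roughly as below. The port number and hostname are placeholders, not values from the original test, and the flags differ slightly between netcat variants:

    # On the chunkserver node: a listener that discards whatever it receives
    # (traditional netcat syntax; BSD netcat omits -p: "nc -l 12345")
    nc -l -p 12345 > /dev/null

    # On each sending machine: stream zeros at the listener to measure raw TCP throughput
    cat /dev/zero | nc chunkserver1 12345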
From: 崔赢 <cui...@gm...> - 2012-03-16 03:34:48
|
From: Robert S. <rsa...@ne...> - 2012-03-16 00:47:48
|
I took the new 1.6.24 and tested it in a scenario where I thought it would be useful to me. The idea was to use it as a method to distribute content in an HPC environment.

I built a small 3 machine cluster, each with a Xeon E3 based CPU, 4 GB RAM and a single 2 TB drive. I copied a few hundred thousand files onto each system that used a few hundred gigabytes. Each of the 3 machines was a chunk server and also mounted the file system locally. Each of them also had a local copy of the data. MooseFS was configured with a replication target of 3. An interesting note: it does not seem to be possible to set it higher than 9.

I did quite a few tests. If I processed 60,000 files locally or on MooseFS on only one machine, then MooseFS performed virtually the same and sometimes better than local storage. If however all three machines processed files simultaneously then the performance was a bit disappointing.

The 60,000 files were significantly larger than the size of RAM on the systems, so caching was ineffective. Each machine processed its own set of files with no overlap between the machines. Processing is a read-only operation on the file system. File reads are random per file. The tests ran on all 3 machines at the same time. The file sizes varied from hundreds of bytes to hundreds of megabytes.

For three tests, each utilizing 4 threads and 1 process per machine to process the data, I got the following times using the local disk:

Machine1: 411 seconds, 428 seconds, 429 seconds
Machine2: 402 seconds, 407 seconds, 408 seconds
Machine3: 412 seconds, 406 seconds, 420 seconds

For three tests, each utilizing 4 threads and 1 process per machine to process the data, I got the following times using MooseFS:

Machine1: 1649 seconds, 1347 seconds, 1266 seconds
Machine2: 1951 seconds, 1631 seconds, 1213 seconds
Machine3: 1852 seconds, 1543 seconds, 1257 seconds

This is not a synthetic test. This is a real world test using real data and real processing. I chose 4 threads as the performance gain on a quad core CPU is insignificant if you use more threads, but the memory use becomes problematic given the limited RAM per machine.

Obviously any recommendations on how to improve this and hopefully get MooseFS as close as possible to using local disk would be appreciated.

The good news for me is that everything seemed stable. I really like the new web interface. During the first part of the tests I had another 2 machines suffering drive failures and one machine had a NIC card that acted a bit strange. The 3 machines I used for the test were stable and showed no issues. During all of these failures the file system was pretty stable.

Robert Sandilands
From: Michał B. <mic...@co...> - 2012-03-15 06:48:07
|
Hi!

Finally, after quite a long break, we released the next stable version - 1.6.24. Most important changes include:

- There is no more filesize limit of 2TiB, now a file can occupy up to 128PiB
- Chunkservers do not check attributes of every chunk during initialization which speeds up starting of CS
- Added 'test' command - checks if process is running and returns its PID
- Added lockfile/pidfile and actions such as 'start', 'stop', 'restart' and 'test' for mfscgiserv
- Added simple net topology support ("rack awareness") - please have a look at mfstopology.cfg
- In case of an error (including a case of no space left on master hdd) master server saves metadata file to alternative locations
- Added hidden files '.oplog' and '.ophistory' with detailed information about current/historical operations performed by mfsmount - it is possible to read pid/uid/gid and operation type on the file made by the client
- Now a password for mounting can be kept in mfsmount.cfg (instead of fstab, which was readable by everyone)
- Introduced symlink cache on client (mfsmount) side
- Fixed problem with frequent connections to the master server. Now it is not necessary to establish a new connection while creating many snapshots or using mfstools (proxy in mount for mfstools)

IMPORTANT: Metadata format has not changed, but if you start to have files bigger than 2TiB it won't be possible to downgrade to 1.6.20. The recommended upgrade order is as usual: mfsmaster, then metaloggers, then chunkservers and at last client machines.

We have also updated our roadmap: http://www.moosefs.org/roadmap.html and as you can read we are going to have one release every month!

MooseFS now has worldwide professional technical support on the enterprise level, provided by our company Core Technology. You can find all necessary details at http://www.coretechnology.pl/support.php

You can download the latest version at the http://www.moosefs.org/download.html webpage. More information about this release is available here: http://moosefs.org/news-reader/items/moose-file-system-v-1624-released.html

For the social guys we have also some special links to join/like:
http://www.linkedin.com/groups?home=&gid=4083088&trk=anet_ug_hm
https://www.facebook.com/moosefs
https://twitter.com/#!/MooseFS

PS. A Russian entry on MooseFS appeared lately at Wikipedia: http://ru.wikipedia.org/wiki/Moose_File_System

Thanks for your great support!

Kind regards
Michal Borychowski
MooseFS Support Manager
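As a quick illustration of the new 'test' action mentioned in the changelog (it reports whether the daemon is running and prints its PID), the daemons can be checked as sketched below, assuming the standard 1.6.24 init-style invocation:

    # Check whether each daemon is running; 'test' prints the PID if it is
    mfsmaster test
    mfschunkserver test
    mfscgiserv test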
From: Wang J. <jia...@re...> - 2012-03-13 11:39:18
|
于 2012/3/13 18:58, Quenten Grasso 写道: > > Hi Jian & Moosefs-users > > I tried mounting the mfsshare on the 6 chunk servers and did a dd test on the 6 Servers at the same time and the results are below, > > “dd if=/dev/zero of=ddfile bs=1000M count=4” > > Server 1, 48.3mb/s > > Server 2, 40.3mb/s > > Server 3, 60.4mb/s > > Server 4, 75.4mb/s > > Server 5, 67.2mb/s > > Server 6, 53.7mb/s > > That’s a total of 345mb/s > It's ok. Some times two or more instances write to the same disk of the same server and lead to contention, depending on timing. You can run the test for several times and see the variant. You would run modestly more instances on more 'mount' (6*3 is a good try) to get "Full speed". It should be below 6*2*60 to 6*2*90 mbps range. You have 6*2 disks and you should at least use 12 sequential writers to saturate the IO bandwidth. The network bandwidth is enough for this test. > Interestingly they vary from 40-75mb/s which I find strange. Next I created another folder and set the goal of 3, and ran the same test again > > “dd if=/dev/zero of=ddfile bs=1000M count=4” > > Server 1, 17.6mb/s > > Server 2, 15.4mb/s > > Server 3, 17.2mb/s > > Server 4, 75.4mb/s > > Server 5, 16.3mb/s > > Server 6, 16.4mb/s > > That’s a total of 158.3mb/s and seems strange the one at 75mb/s? > You should notice that these numbers*3 are close to the number in the goal=1 test (except the 75mb/s one). If you are lucky enough, you should get more aggregated throughput. Run the test for several times. The 75.4mb/s number is errnous, check your test again. Actually, you read the 17.6mb/s number but under the hood, the data is written 3 times. These 3 write are sequential so the speed is less than (a disk's IO throughput)/3. For goal=3, the networking can be but not neccessarily be bottleneck. It depends. BTW: you run dd with of=ddfileX where X = 1 to 6? PS: May I suggest you run without count=4 or with a very big count? because I don't think you start these dd instance at the same time. PPS: Try use kernel above 2.6.26 and mfsmount with -o big_writes. > So anyone have any tips for tuning? > > I’ll try creating a software Raid 0 for the 6 box’s next and see if this improves things I guess. > > Regards, > > Quenten > > *From:*Wang Jian [mailto:jia...@re...] > *Sent:* Tuesday, 13 March 2012 6:55 PM > *To:* Quenten Grasso > *Cc:* 'moo...@li...' > *Subject:* Re: [Moosefs-users] Data Writes > > > 于 2012/3/13 15:36, Quenten Grasso 写道: > > Hi All, > > I’m currently doing some testing with mfs, 1.6.20, and I have a Ubuntu 12.04 KVM server with our mfs share mounted /mnt/moosefs > > My MFS setup is currently > > 6 x Servers Dual Quad Core AMD 2.xGhz with 2 x SATA 7200RPM Hard Disks each formatted with XFS file system connected via 2 x LACP/Bonded 1 GBE. > > 1 x Metadata server which is also the KVM server. > > All of the drives I’ve completed a sequential write dd test I was able to achieve above 60mb/s to 90mb/s each drive doing a 8Gb Seq Write. “dd if=/dev/zero of=ddfile bs=1000M count=8” > > As Moose writes in 64MB Chunks imagine the speed of each disk should be more the adequate for this test. > > So my testing so far in my test folder I have set has a goal of 1 and doing this I am able to achieve around 58mb/s – 65mb/s also using a Goal of 2 or 3 is very similar results. > > > I would suggest you run multiple dd instances. > > Sequential write means at any time, a single dd instance is writing to a single chunk (and its replica) thus hits a single disk within a single server (and replica servers). 
> > > Essentially what I’m seeing is the data flows stop and start when looking at the network traffic across all of the machines instead of it being a even flow which I expect is why my speeds are not > what I would be expecting, > > Setup overview, > > We have 12Gbit of Network Throughput into 6 Servers and 12 disks of worst case avg of 60MB/s per disk which is around 720MB/s in the disks alone. Assuming no replicas I would expect the bottle neck > in this case to be the network due to the servers have 2Gbit trunk but maximum speed per transfer is technically 1GBE. However I’m not seeing the bottle necks in the places I would expect them the > chunk server seems to be waiting for data and the metadata server is pretty much sleeping from a cpu/io perspective and the test servers network card TX ‘s/Writes are all over the place.. > > I have tried jumbo frames as well and no significant improvements were found our test switch is a Dell Power Connect 6248 48Port Gbit Ethernet Switch. > > I’ve also completed network speed tests to each of the chunk servers from the test server/metadata server and I was able to achieve 890-940mbit/sec full duplex. > > Any ideas? > > Thanks, > > Quenten > > > > > ------------------------------------------------------------------------------ > Keep Your Developer Skills Current with LearnDevNow! > The most comprehensive online learning library for Microsoft developers > is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3, > Metro Style Apps, more. Free future releases when you subscribe now! > http://p.sf.net/sfu/learndevnow-d2d > > > > > _______________________________________________ > moosefs-users mailing list > moo...@li... <mailto:moo...@li...> > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
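To put the advice above into practice (several independent sequential writers per mounted client, so writes land on many chunks and disks at once), a test along these lines can be run on each of the six servers; file names, block sizes and counts are example values:

    # Run three parallel sequential writers per mount point (example values only)
    for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/moosefs/ddfile$i bs=64M count=64 &
    done
    wait   # then aggregate the per-writer rates that dd reports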
From: Quenten G. <QG...@on...> - 2012-03-13 10:59:29
|
Hi Jian & Moosefs-users I tried mounting the mfsshare on the 6 chunk servers and did a dd test on the 6 Servers at the same time and the results are below, “dd if=/dev/zero of=ddfile bs=1000M count=4” Server 1, 48.3mb/s Server 2, 40.3mb/s Server 3, 60.4mb/s Server 4, 75.4mb/s Server 5, 67.2mb/s Server 6, 53.7mb/s That’s a total of 345mb/s Interestingly they vary from 40-75mb/s which I find strange. Next I created another folder and set the goal of 3, and ran the same test again “dd if=/dev/zero of=ddfile bs=1000M count=4” Server 1, 17.6mb/s Server 2, 15.4mb/s Server 3, 17.2mb/s Server 4, 75.4mb/s Server 5, 16.3mb/s Server 6, 16.4mb/s That’s a total of 158.3mb/s and seems strange the one at 75mb/s? So anyone have any tips for tuning? I’ll try creating a software Raid 0 for the 6 box’s next and see if this improves things I guess. Regards, Quenten From: Wang Jian [mailto:jia...@re...] Sent: Tuesday, 13 March 2012 6:55 PM To: Quenten Grasso Cc: 'moo...@li...' Subject: Re: [Moosefs-users] Data Writes 于 2012/3/13 15:36, Quenten Grasso 写道: Hi All, I’m currently doing some testing with mfs, 1.6.20, and I have a Ubuntu 12.04 KVM server with our mfs share mounted /mnt/moosefs My MFS setup is currently 6 x Servers Dual Quad Core AMD 2.xGhz with 2 x SATA 7200RPM Hard Disks each formatted with XFS file system connected via 2 x LACP/Bonded 1 GBE. 1 x Metadata server which is also the KVM server. All of the drives I’ve completed a sequential write dd test I was able to achieve above 60mb/s to 90mb/s each drive doing a 8Gb Seq Write. “dd if=/dev/zero of=ddfile bs=1000M count=8” As Moose writes in 64MB Chunks imagine the speed of each disk should be more the adequate for this test. So my testing so far in my test folder I have set has a goal of 1 and doing this I am able to achieve around 58mb/s – 65mb/s also using a Goal of 2 or 3 is very similar results. I would suggest you run multiple dd instances. Sequential write means at any time, a single dd instance is writing to a single chunk (and its replica) thus hits a single disk within a single server (and replica servers). Essentially what I’m seeing is the data flows stop and start when looking at the network traffic across all of the machines instead of it being a even flow which I expect is why my speeds are not what I would be expecting, Setup overview, We have 12Gbit of Network Throughput into 6 Servers and 12 disks of worst case avg of 60MB/s per disk which is around 720MB/s in the disks alone. Assuming no replicas I would expect the bottle neck in this case to be the network due to the servers have 2Gbit trunk but maximum speed per transfer is technically 1GBE. However I’m not seeing the bottle necks in the places I would expect them the chunk server seems to be waiting for data and the metadata server is pretty much sleeping from a cpu/io perspective and the test servers network card TX ‘s/Writes are all over the place.. I have tried jumbo frames as well and no significant improvements were found our test switch is a Dell Power Connect 6248 48Port Gbit Ethernet Switch. I’ve also completed network speed tests to each of the chunk servers from the test server/metadata server and I was able to achieve 890-940mbit/sec full duplex. Any ideas? Thanks, Quenten ------------------------------------------------------------------------------ Keep Your Developer Skills Current with LearnDevNow! The most comprehensive online learning library for Microsoft developers is just $99.99! 
Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3, Metro Style Apps, more. Free future releases when you subscribe now! http://p.sf.net/sfu/learndevnow-d2d _______________________________________________ moosefs-users mailing list moo...@li...<mailto:moo...@li...> https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Wang J. <jia...@re...> - 2012-03-13 08:55:31
|
于 2012/3/13 15:36, Quenten Grasso 写道: > > Hi All, > > I’m currently doing some testing with mfs, 1.6.20, and I have a Ubuntu 12.04 KVM server with our mfs share mounted /mnt/moosefs > > My MFS setup is currently > > 6 x Servers Dual Quad Core AMD 2.xGhz with 2 x SATA 7200RPM Hard Disks each formatted with XFS file system connected via 2 x LACP/Bonded 1 GBE. > > 1 x Metadata server which is also the KVM server. > > All of the drives I’ve completed a sequential write dd test I was able to achieve above 60mb/s to 90mb/s each drive doing a 8Gb Seq Write. “dd if=/dev/zero of=ddfile bs=1000M count=8” > > As Moose writes in 64MB Chunks imagine the speed of each disk should be more the adequate for this test. > > So my testing so far in my test folder I have set has a goal of 1 and doing this I am able to achieve around 58mb/s – 65mb/s also using a Goal of 2 or 3 is very similar results. > I would suggest you run multiple dd instances. Sequential write means at any time, a single dd instance is writing to a single chunk (and its replica) thus hits a single disk within a single server (and replica servers). > Essentially what I’m seeing is the data flows stop and start when looking at the network traffic across all of the machines instead of it being a even flow which I expect is why my speeds are not > what I would be expecting, > > Setup overview, > > We have 12Gbit of Network Throughput into 6 Servers and 12 disks of worst case avg of 60MB/s per disk which is around 720MB/s in the disks alone. Assuming no replicas I would expect the bottle neck > in this case to be the network due to the servers have 2Gbit trunk but maximum speed per transfer is technically 1GBE. However I’m not seeing the bottle necks in the places I would expect them the > chunk server seems to be waiting for data and the metadata server is pretty much sleeping from a cpu/io perspective and the test servers network card TX ‘s/Writes are all over the place.. > > I have tried jumbo frames as well and no significant improvements were found our test switch is a Dell Power Connect 6248 48Port Gbit Ethernet Switch. > > I’ve also completed network speed tests to each of the chunk servers from the test server/metadata server and I was able to achieve 890-940mbit/sec full duplex. > > Any ideas? > > Thanks, > > Quenten > > > > ------------------------------------------------------------------------------ > Keep Your Developer Skills Current with LearnDevNow! > The most comprehensive online learning library for Microsoft developers > is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3, > Metro Style Apps, more. Free future releases when you subscribe now! > http://p.sf.net/sfu/learndevnow-d2d > > > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users |
From: Quenten G. <QG...@on...> - 2012-03-13 07:37:33
|
Hi All, I'm currently doing some testing with mfs, 1.6.20, and I have a Ubuntu 12.04 KVM server with our mfs share mounted /mnt/moosefs My MFS setup is currently 6 x Servers Dual Quad Core AMD 2.xGhz with 2 x SATA 7200RPM Hard Disks each formatted with XFS file system connected via 2 x LACP/Bonded 1 GBE. 1 x Metadata server which is also the KVM server. All of the drives I've completed a sequential write dd test I was able to achieve above 60mb/s to 90mb/s each drive doing a 8Gb Seq Write. "dd if=/dev/zero of=ddfile bs=1000M count=8" As Moose writes in 64MB Chunks imagine the speed of each disk should be more the adequate for this test. So my testing so far in my test folder I have set has a goal of 1 and doing this I am able to achieve around 58mb/s - 65mb/s also using a Goal of 2 or 3 is very similar results. Essentially what I'm seeing is the data flows stop and start when looking at the network traffic across all of the machines instead of it being a even flow which I expect is why my speeds are not what I would be expecting, Setup overview, We have 12Gbit of Network Throughput into 6 Servers and 12 disks of worst case avg of 60MB/s per disk which is around 720MB/s in the disks alone. Assuming no replicas I would expect the bottle neck in this case to be the network due to the servers have 2Gbit trunk but maximum speed per transfer is technically 1GBE. However I'm not seeing the bottle necks in the places I would expect them the chunk server seems to be waiting for data and the metadata server is pretty much sleeping from a cpu/io perspective and the test servers network card TX 's/Writes are all over the place.. I have tried jumbo frames as well and no significant improvements were found our test switch is a Dell Power Connect 6248 48Port Gbit Ethernet Switch. I've also completed network speed tests to each of the chunk servers from the test server/metadata server and I was able to achieve 890-940mbit/sec full duplex. Any ideas? Thanks, Quenten |
From: 颜秉珩 <rw...@12...> - 2012-03-13 06:54:45
|
Such a long wait for a new version! I saw many impressive features on the roadmap.

BinghengYan

From: Ken
Date: 2012-03-13 14:06
To: moosefs-users
Subject: [Moosefs-users] About The Roadmap

Hi, all

I noticed the moosefs team updated the roadmap yesterday: http://www.moosefs.org/roadmap.html

The schedule and the features are very exciting. Thanks so much to the team.

Best Regards
-Ken
From: Ken <ken...@gm...> - 2012-03-13 06:06:26
|
Hi, all

I noticed the moosefs team updated the roadmap yesterday: http://www.moosefs.org/roadmap.html

The schedule and the features are very exciting. Thanks so much to the team.

Best Regards
-Ken
From: Ken <ken...@gm...> - 2012-03-13 03:57:37
|
>> The upcoming version has this feature, .... Glad to hear this again, I notice this was January. I am very appreciate working of moosefs team, we are very expected to the version. Best Regards -Ken On Tue, Mar 13, 2012 at 11:37 AM, Davies Liu <dav...@gm...> wrote: > > > On Tue, Mar 13, 2012 at 10:21 AM, Ken <ken...@gm...> wrote: > >> Hi, Davies and Moosefs >> >> I agree Davies' opinion in previous mail, undelete metadata.mfs.bak maybe >> is the best way. >> >> Recovery is very simply. >> >> After downloaded the damaged metadata from ChenGang, I began digging the >> code, comment a few lines(filesystem.c) of "return -1" in fs_loadnodes, >> fs_loadedges, chunk_load(procedures in restore). >> And print some count, like fsnodes count, edge count.., now we know inode >> information is complete, lost a half of edge, chunks totally lost. >> >> The most important information is inode ==> chunk id(s). It's complete, >> data will be remained. >> > > Great work I just did not notice that the relations between inode and > chunk id is in nodes, > I thought they are in chunk block. > > I'm very sorry for the wrong conclusion for Chen Gang, he maybe will lost > data without you help :-( > And thank you for the wonderful hack. > > Then collect all the chunks id, version info from disk of chunkserver, and >> write(use a python script) it to a single file(chunks.bin) which format >> same as metadata.mfs. Here, I made a mistake because of duplicate chunks >> via goal 2. >> >> Make some dirty change of chunk_load(in chunks.c), load chunks.bin >> instead of "metadata.mfs" >> At last execute "mfsmetarestore -o metadata.mfs -m metadata.mfs.part". We >> got the metadata.mfs. >> >> This took me almost 4 hours. >> > > Great efficience. He should buy you a drink :) > > >> >> In the accident, I notice something: >> a. The reason is disk full of master server, Why damaged occured in >> metalogger? Take a close look in fs_storeall(filesystem.c). Why unlink >> ("metadata.mfs") after write metadata failed? >> > > The upcoming version maybe already fixed this issue, need more check. > > >> b. Keep more copies metadata.mfs in metalogger and mfsmaster maybe good. >> > > The upcoming version has this feature, and I have patched my cluster also. > > >> c. Split the huge metadata.mfs to 3 files: inode, edge, chunk maybe be >> benefet in diagnosing, performance optimization. I guess. >> >> Any suggestion? >> >> Best Regards. >> -Ken >> >> >> >> >> On Tue, Mar 13, 2012 at 9:53 AM, Davies Liu <dav...@gm...> wrote: >> >>> Hi, >>> >>> congratulation ! >>> >>> Can you show some details about how to recover the data back ? >>> >>> Davies >>> >>> >>> On Tue, Mar 13, 2012 at 4:24 AM, 陈钢 <yik...@gm...> wrote: >>> >>>> Hi all. I got some message to report. >>>> >>>> The situation I faced is that I got my metadata.mfs broken when >>>> "mfsmaster restart" include the metadata.mfs on mfsmetalogger. >>>> >>>> In fact, the metadata on mfsmetalogger is "more broken" than the >>>> metadata on mfsmaster.That is wired. >>>> >>>> So, I got all my files back but half of them lost their filename. >>>> >>>> Lucky,ken...@gm... helped me,and I have a SQLite file which >>>> stored filesize and filename in it.I have already restored my important >>>> data 90 percent now. >>>> >>>> And , I wrote some script helps me cp the metadata.mfs then rsync it to >>>> another server every hour. just like then suggestion in >>>> http://www.moosefs.org/moosefs-faq.html#metadata-backup. >>>> >>>> Metadata is really important , it worse a incremental backup. 
>>>> >>>> >>>> 2012/3/9 陈钢 <yik...@gm...> >>>> >>>>> Maybe, 80% data can be found. i still trying. >>>>> i tried restore every metadata.mfs i have, just not work. >>>>> >>>>> >>>>> 2012/3/8 Olivier Thibault <Oli...@lm...> >>>>> >>>>>> Hi, >>>>>> >>>>>> Did you solve your problem ? >>>>>> I had few days ago a mfsmaster crash which went out of memory. >>>>>> When I tried to restart, it crashed saying that there was no >>>>>> metadata.mfs file. >>>>>> I tried "mfsmetarestore -a". It didn't work, saying that >>>>>> metadata.mfs.back was corrupted. >>>>>> There was a metadata.mfs.back file and a metadata.mfs.back.tmp file. >>>>>> metadata.mfs.back was half the size it should be. >>>>>> I restored from a daily backup the metadata.mfs file, then ran again >>>>>> 'mfsmetarestore -a', and this time, it worked. I could then start mfsmaster >>>>>> successfully. >>>>>> Did you tried that ? I mean, just restore the latest working >>>>>> metadata.mfs file ? >>>>>> >>>>>> HTH. >>>>>> >>>>>> Best regards, >>>>>> >>>>>> Olivier >>>>>> >>>>>> >>>>>> >>>>>> Le 07/03/12 04:20, 陈钢 a écrit : >>>>>> >>>>>>> In master`s metadata.mfs,i saw nodes part is complete, part of >>>>>>> names, no free, >>>>>>> no chunks.. >>>>>>> >>>>>>> 2012/3/7 Davies Liu <dav...@gm... <mailto: >>>>>>> dav...@gm...>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I had try to recover it from metadata.mfs and >>>>>>> metadata_ml.mfs.back, but failed. >>>>>>> >>>>>>> Because disk is full, mfsmaster had not dump all the metadata >>>>>>> into disks, >>>>>>> had part of nodes in metadata.mfs, no names, no edges, no chunks. >>>>>>> No hope to recover from the broken metadata.mfs, it's too short. >>>>>>> >>>>>>> The changelogs are also helpless, only in two days. >>>>>>> >>>>>>> The only hope is try to undelete the previous >>>>>>> metadata_ml.mfs.back from >>>>>>> metalogger machine, Chenggang had also failed, with extundelete >>>>>>> and etx3grep, >>>>>>> maybe some experts can archive this. >>>>>>> >>>>>>> The final options is to GUESS the relation between files and >>>>>>> chunks by >>>>>>> chunk id and size fo files, if data lost can not been afforded. >>>>>>> Each chunk is >>>>>>> combined with crc checksum and real data, if we know the relation >>>>>>> between >>>>>>> files and chunksever, then we can get data back. Or we can >>>>>>> contruct the >>>>>>> metadata according to the GUESS, the using mfsmaster to recover >>>>>>> them. >>>>>>> >>>>>>> Davies >>>>>>> >>>>>>> On Wed, Mar 7, 2012 at 10:56 AM, Ken <ken...@gm... >>>>>>> <mailto:ken...@gm...>> wrote: >>>>>>> >>>>>>> Hi, chengang >>>>>>> >>>>>>> I think you should try more, and post detail here. Someone >>>>>>> must resolve it. >>>>>>> Maybe you will lost some data in last few minutes, but 250T >>>>>>> should be saved. >>>>>>> >>>>>>> At first, BACKUP all files: >>>>>>> /var/lib/mfs/* on master >>>>>>> /var/lib/mfs/* on mfsmetalogger >>>>>>> >>>>>>> about restore error: >>>>>>> >>>>>>> file 'metadata.mfs.back' not found - will try >>>>>>> 'metadata_ml.mfs.back' >>>>>>> instead >>>>>>> loading objects (files,directories,etc.) ... loading >>>>>>> node: read >>>>>>> error: ENOENT (No such file or directory) >>>>>>> error >>>>>>> can't read metadata from file: .//metadata_ml.mfs.back >>>>>>> >>>>>>> How did you run mfsmetarestore? add -d options? >>>>>>> If stat(datapath + metadata_ml.mfs.back) fail, these error >>>>>>> will occur. >>>>>>> Maybe use strace will show why stat fail exactly. >>>>>>> >>>>>>> ps: I am in Beijing now and I can provide more help. 
>>>>>>> >>>>>>> HTH >>>>>>> >>>>>>> -Ken >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Mar 7, 2012 at 10:21 AM, 陈钢 <yik...@gm... >>>>>>> <mailto:yik...@gm...**>> wrote: >>>>>>> >>>>>>> can not start mfsmaster with the file "78962688 Mar 6 >>>>>>> 17:18 >>>>>>> metadata.mfs ".. >>>>>>> i tried that. :( >>>>>>> >>>>>>> 2012/3/7 Ricardo J. Barberis < >>>>>>> ric...@da... >>>>>>> <mailto:ricardo.barberis@**dattatec.com<ric...@da...> >>>>>>> >> >>>>>>> >>>>>>> >>>>>>> El Martes 06/03/2012, 陈钢 escribió: >>>>>>> > on master >>>>>>> [ ... ] >>>>>>> > -rw-r----- 1 mfs mfs 78962688 Mar 6 17:18 >>>>>>> metadata.mfs >>>>>>> > -rw-r--r-- 1 root root 8 Jul 4 2011 >>>>>>> metadata.mfs.empty >>>>>>> > -rw-r----- 1 mfs mfs 5984 Mar 6 12:00 >>>>>>> sessions.mfs >>>>>>> > -rw-r----- 1 mfs mfs 0 Mar 6 16:46 >>>>>>> sessions.mfs.tmp >>>>>>> > -rw-r----- 1 mfs mfs 131072 Mar 6 17:18 >>>>>>> stats.mfs >>>>>>> >>>>>>> You have /var/lib/mfs/metadata.mfs on the master, it >>>>>>> might not >>>>>>> be corrupt >>>>>>> after all? >>>>>>> >>>>>>> I'd suggest: >>>>>>> >>>>>>> - backup /var/lib/mfs to another disk/server (for >>>>>>> later recovery >>>>>>> if needed) >>>>>>> - make sure you have free space in your main disk >>>>>>> - then simply try to start mfsmaster >>>>>>> - check mfs.cgi (web interface) if it looks OK >>>>>>> >>>>>>> >>>>>>> BUT: if you can, wait for confirmation from Michał >>>>>>> Borychowski >>>>>>> first, in case >>>>>>> what I'm telling you is not safe. >>>>>>> >>>>>>> >>>>>>> (BTW: You have Reply-To set to che...@cp... >>>>>>> <mailto:che...@cp...>, I don't know if that's >>>>>>> >>>>>>> intentional on your part) >>>>>>> >>>>>>> Hope it helps, >>>>>>> -- >>>>>>> Ricardo J. Barberis >>>>>>> Senior SysAdmin / ITI >>>>>>> Dattatec.com :: Soluciones de Web Hosting >>>>>>> Tu Hosting hecho Simple! >>>>>>> >>>>>>> ------------------------------**------------ >>>>>>> >>>>>>> >>>>>>> >>>>>>> ------------------------------** >>>>>>> ------------------------------**------------------ >>>>>>> Virtualization & Cloud Management Using Capacity Planning >>>>>>> Cloud computing makes use of virtualization - but cloud >>>>>>> computing >>>>>>> also focuses on allowing computing to be delivered as a >>>>>>> service. >>>>>>> http://www.accelacomm.com/jaw/**sfnl/114/51521223/<http://www.accelacomm.com/jaw/sfnl/114/51521223/> >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> moosefs-users mailing list >>>>>>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>>>>>> <mailto:moosefs-users@lists.**sourceforge.net<moo...@li...> >>>>>>> > >>>>>>> >>>>>>> https://lists.sourceforge.net/** >>>>>>> lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ------------------------------** >>>>>>> ------------------------------**------------------ >>>>>>> Virtualization & Cloud Management Using Capacity Planning >>>>>>> Cloud computing makes use of virtualization - but cloud >>>>>>> computing >>>>>>> also focuses on allowing computing to be delivered as a >>>>>>> service. 
>>>>>>> http://www.accelacomm.com/jaw/**sfnl/114/51521223/<http://www.accelacomm.com/jaw/sfnl/114/51521223/> >>>>>>> ______________________________**_________________ >>>>>>> moosefs-users mailing list >>>>>>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>>>>>> <mailto:moosefs-users@lists.**sourceforge.net<moo...@li...> >>>>>>> > >>>>>>> >>>>>>> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> - Davies >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ------------------------------**------------------------------** >>>>>>> ------------------ >>>>>>> Virtualization& Cloud Management Using Capacity Planning >>>>>>> >>>>>>> Cloud computing makes use of virtualization - but cloud computing >>>>>>> also focuses on allowing computing to be delivered as a service. >>>>>>> http://www.accelacomm.com/jaw/**sfnl/114/51521223/<http://www.accelacomm.com/jaw/sfnl/114/51521223/> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ______________________________**_________________ >>>>>>> moosefs-users mailing list >>>>>>> moosefs-users@lists.**sourceforge.net<moo...@li...> >>>>>>> https://lists.sourceforge.net/**lists/listinfo/moosefs-users<https://lists.sourceforge.net/lists/listinfo/moosefs-users> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >>> >>> -- >>> - Davies >>> >> >> > > > -- > - Davies > |
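The recovery thread quoted above ends with a practical takeaway: keep extra copies of the master's metadata, for example by copying it and rsyncing it to another host every hour, as one poster describes. A minimal sketch of such a cron-driven script follows; all paths, filenames and the remote host are assumed example values, and the exact name of the metadata dump on a running master (metadata.mfs vs. metadata.mfs.back) should be checked against your installation:

    #!/bin/sh
    # Hourly MooseFS metadata backup along the lines described in the thread:
    # keep a timestamped local copy, then push it to a second machine.
    # All paths and the remote host below are assumed example values.
    set -e
    SRC=/var/lib/mfs/metadata.mfs.back    # metadata dump on the running master (name may differ)
    DST=/var/backups/mfs
    TS=$(date +%Y%m%d-%H%M)
    mkdir -p "$DST"
    cp "$SRC" "$DST/metadata.mfs.$TS"
    rsync -a "$DST/" backuphost:/srv/mfs-metadata/
    # example crontab entry (assumed): 0 * * * * /usr/local/sbin/mfs-meta-backup.sh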