From: Kristofer P. <kri...@cy...> - 2011-07-30 01:54:33
> I've Googled up and down the planet and haven't found any recent
> packages for Ubuntu, or even anybody discussing them. This seems so
> strange that I'm almost convinced that I'm missing something.
>
> Am I missing something, or is there really nobody maintaining packages
> for Ubuntu/Debian?

I don't think anyone is maintaining a package, but it's extremely easy to compile it for Ubuntu. You just need to install the fuse and fuse-devel packages through apt-get. I think there were one or two others -- it's been a while since I've actually had to deploy a new MFS machine.
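For anyone else hunting for this, a rough sketch of the build on Ubuntu. The package names are the Debian/Ubuntu equivalents of the ones mentioned above, and the tarball name, version and configure flags are illustrative placeholders, so adjust them to whatever release you actually download:

# build prerequisites (libfuse-dev is Ubuntu's name for the FUSE headers;
# fuse-devel is the Red Hat-style name mentioned above)
sudo apt-get install build-essential pkg-config libfuse-dev zlib1g-dev

# fetch, unpack and build the source tarball (version shown is a placeholder)
tar xzf mfs-1.6.x.tar.gz
cd mfs-1.6.x
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib
make
sudo make install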
From: Dennis J. <den...@co...> - 2011-07-30 01:46:17
I already had iptables disabled and now I also disabled selinux on all systems, but I'm still seeing this problem. This time (after I rebooted all systems once) when I issue the copy command nothing happens for about 60 seconds and then I get the "no space" error again:

[root@centos6 ~]# mfsmount /mnt/mfs/ -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[root@centos6 ~]# df -h |grep -E "mfs|Avail"
Filesystem      Size  Used Avail Use% Mounted on
mfsmaster:9421  1.4G     0  1.4G   0% /mnt/mfs
[root@centos6 ~]# cp sf2.stap /mnt/mfs/test/
cp: closing `/mnt/mfs/test/sf2.stap': No space left on device
[root@centos6 ~]# df -h |grep -E "mfs|Avail"
Filesystem      Size  Used Avail Use% Mounted on
mfsmaster:9421  1.4G     0  1.4G   0% /mnt/mfs
[root@centos6 ~]# ls -l /mnt/mfs/test/
total 0
-rw-r--r--. 1 root root 0 Jul 30 03:35 sf2.stap

Regards,
Dennis

On 07/30/2011 02:59 AM, WK wrote:
> Have you verified that you aren't firewalling out any MFS ports between
> the client/master/chunkservers?
>
> Try with iptables turned off and see if you still see the issues.
> Absent that review your mount command.
>
> Also check make sure selinux is not getting in the way.
>
> -wk
>
> On 7/29/11 1:20 PM, Dennis Jacobfeuerborn wrote:
>> Hi,
>> I'm starting to play with MooseFS and while the setup seemed to work fine I
>> ran into trouble after mounting it.
>>
>> My Setup is a Centos 6 host system and four Centos 6 virtual machines
>> installed as master, metalogger and two chunk servers.
>> Installing everything went smoothly and I can see the two chunk server in
>> the cgi stats interface. Both chunk servers have a 1gb partition formatted
>> as ext4 mounted under /mnt/data for the storage.
>> The cgi interface shows 2GiB total space and 1.3GiB available.
>>
>> I can mount the filesystem on a client and df -h shows 1.4G of free space
>> which looks ok. However the moment I try to copy a file no matter how small
>> I immediately get a "No space left on device" error. Doing a "ls" after
>> that shows the file on the filesystem with a size of 0 bytes. A "mkdir
>> /mnt/mfs/test" for example works fine though.
>>
>> I see the following in the syslog on the client:
>>
>> Jul 29 21:47:11 centos6 mfsmount[9930]: file: 6, index: 0 - fs_writechunk
>> returns status 21
>> Jul 29 21:47:11 centos6 mfsmount[9930]: error writing file number 6: ENOSPC
>> (No space left on device)
>>
>> Any ideas what the problem could be?
>>
>> Regards,
>> Dennis
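An aside for anyone hitting the same wall. This is only a guess, not a confirmed diagnosis: MooseFS chunkservers keep a chunk of free space on each data disk in reserve and report only the remainder to the master, so very small 1 GB test partitions can look writable in df on the client while the master still refuses to allocate chunks (which is what fs_writechunk status 21 amounts to). A few quick checks, with paths that depend on your install:

# on each chunkserver: how much space the data partition really has left
df -h /mnt/data

# what the chunkserver itself says about its disks (syslog location may differ)
grep -i mfschunkserver /var/log/messages | tail -20

# the CGI interface ("Servers" / "Disks" tabs) shows the space the master believes
# each chunkserver can offer; that figure, not the client-side df, decides whether
# a new chunk can be allocated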
From: WK <wk...@bn...> - 2011-07-30 00:59:36
Have you verified that you aren't firewalling out any MFS ports between the client/master/chunkservers?

Try with iptables turned off and see if you still see the issues. Absent that, review your mount command.

Also check to make sure selinux is not getting in the way.

-wk

On 7/29/11 1:20 PM, Dennis Jacobfeuerborn wrote:
> Hi,
> I'm starting to play with MooseFS and while the setup seemed to work fine I
> ran into trouble after mounting it.
>
> My Setup is a Centos 6 host system and four Centos 6 virtual machines
> installed as master, metalogger and two chunk servers.
> Installing everything went smoothly and I can see the two chunk server in
> the cgi stats interface. Both chunk servers have a 1gb partition formatted
> as ext4 mounted under /mnt/data for the storage.
> The cgi interface shows 2GiB total space and 1.3GiB available.
>
> I can mount the filesystem on a client and df -h shows 1.4G of free space
> which looks ok. However the moment I try to copy a file no matter how small
> I immediately get a "No space left on device" error. Doing a "ls" after
> that shows the file on the filesystem with a size of 0 bytes. A "mkdir
> /mnt/mfs/test" for example works fine though.
>
> I see the following in the syslog on the client:
>
> Jul 29 21:47:11 centos6 mfsmount[9930]: file: 6, index: 0 - fs_writechunk
> returns status 21
> Jul 29 21:47:11 centos6 mfsmount[9930]: error writing file number 6: ENOSPC
> (No space left on device)
>
> Any ideas what the problem could be?
>
> Regards,
> Dennis
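The checks being suggested here look roughly like this on a CentOS box; the port numbers listed are the stock MooseFS defaults, so verify them against your own mfsmaster.cfg and mfschunkserver.cfg:

# show current firewall rules and confirm nothing drops the MooseFS ports
iptables -L -n -v

# temporarily stop the firewall to rule it out (CentOS 6 init script)
service iptables stop

# check whether SELinux is enforcing, and relax it for the duration of the test
getenforce
setenforce 0

# default MooseFS ports that must be reachable:
#   9420  chunkservers -> master
#   9421  clients (mfsmount) -> master
#   9422  clients and other chunkservers -> chunkserver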
From: Atom P. <ap...@di...> - 2011-07-30 00:09:34
I've Googled up and down the planet and haven't found any recent packages for Ubuntu, or even anybody discussing them. This seems so strange that I'm almost convinced that I'm missing something.

Am I missing something, or is there really nobody maintaining packages for Ubuntu/Debian?

--
-- Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
(425) 895-4443
From: Dennis J. <den...@co...> - 2011-07-29 20:46:49
Hi,
I'm starting to play with MooseFS, and while the setup seemed to work fine I ran into trouble after mounting it.

My setup is a CentOS 6 host system and four CentOS 6 virtual machines installed as master, metalogger and two chunk servers. Installing everything went smoothly and I can see the two chunk servers in the cgi stats interface. Both chunk servers have a 1 GB partition formatted as ext4 mounted under /mnt/data for the storage. The cgi interface shows 2 GiB total space and 1.3 GiB available.

I can mount the filesystem on a client and df -h shows 1.4G of free space, which looks ok. However, the moment I try to copy a file, no matter how small, I immediately get a "No space left on device" error. Doing an "ls" after that shows the file on the filesystem with a size of 0 bytes. A "mkdir /mnt/mfs/test", for example, works fine though.

I see the following in the syslog on the client:

Jul 29 21:47:11 centos6 mfsmount[9930]: file: 6, index: 0 - fs_writechunk returns status 21
Jul 29 21:47:11 centos6 mfsmount[9930]: error writing file number 6: ENOSPC (No space left on device)

Any ideas what the problem could be?

Regards,
Dennis
From: Kristofer P. <kri...@cy...> - 2011-07-27 19:59:00
Are there any plans to have an update to MooseFS in the near future? I'm excited to see new features, especially location awareness.
From: <wk...@bn...> - 2011-07-27 06:11:22
CentOS 5.x on all.

-wk

On 7/26/11 9:41 PM, Mostafa Rokooie wrote:
> Thanks for you reply,
> May I know which operating systems are you using for Master and
> Chunkservers?
>
> --Mostafa Rokooie
From: WK <wk...@bn...> - 2011-07-26 22:29:07
We have been running several MooseFS clusters, each with 20,000-50,000 users, for IMAP Maildir storage for a few months now. We have been very happy with the results to date, compared to our old solutions.

Comments:

Each cluster has at least 4 chunkservers, some of which are the metaloggers. Pay attention to the RAM requirements, as a chunkserver that goes into swap is going to slow things down, and MooseFS chunks eat RAM. The chunkservers aren't very powerful CPU-wise, generally older Opterons that were repurposed, with SATA drives.

We use dual power supply, ECC RAM servers for the Masters, with lots of RAM. You don't want to lose your Master, though the one time we did have an issue, we were able to recover from the metalogger with very, very minimal loss, which mostly amounted to the files that were 'on the fly' at the time of the failure. You'd see the same thing with an NFS server.

We use a Goal of 3 because if a chunk server dies it can take weeks for it to rebalance. Knowing there are still two copies during the rebalance gives one peace of mind.

We set the trashtime to offset 12 hours because if you delete a large account with hundreds of thousands of files in their folders, when the trashtime expires the mount performance is really affected as the server tries to delete all those chunks and their "goal" copies. With an offset of 12 hours that generally happens at a less busy time. (Note: I filed a bug describing the problem on SourceForge. They really need some sort of way to 'nice' those large deletions.)

MooseFS is not efficient with small files in regards to storage size. You still get the same size chunk no matter the file size. So between the goal setting and the chunk size, expect a 5x-7x increase in your total storage requirement over a jbod disk setup. Our attitude is that SATA hard disks are cheap, and due to the goal replication of three it's not that big a deal if one dies. Certainly not worth paying 5x for Enterprise drives in this situation.

We use multiple MooseFS clusters because we are paranoid and didn't want to put all our eggs in one basket; however, we would be comfortable expanding the original clusters as the need occurs by adding additional/bigger drives and/or chunkservers.

-WK

On 7/26/11 2:26 AM, Mostafa Rokooie wrote:
> I'm going to use MooseFS as my mail server's backend storage, I did
> many research to find a suitable fault tolerant distributed file
> system, finally I think that MooseFS can be a good choice for me,
> We're going to have large amount of users (100,000+).
> Is here anybody who experienced MooseFS as a mail server's storage?
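For anyone wanting to reproduce the goal and trashtime settings described above, the knobs are the mfssetgoal/mfssettrashtime tools run against a mounted tree. The path below is a placeholder, and the trashtime value (in seconds) is just one way to realize the "offset 12 hours" idea, not the poster's exact setting:

# keep three copies of every chunk under the mail tree
mfssetgoal -r 3 /mnt/mfs/maildirs

# keep deleted files in trash for 36 hours (129600 seconds) so the mass
# chunk deletion lands in a quieter part of the day
mfssettrashtime -r 129600 /mnt/mfs/maildirs

# verify what is actually set
mfsgetgoal -r /mnt/mfs/maildirs
mfsgettrashtime -r /mnt/mfs/maildirs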
From: Mostafa R. <mos...@gm...> - 2011-07-26 09:26:26
I'm going to use MooseFS as my mail server's backend storage. I did a lot of research to find a suitable fault-tolerant distributed file system, and I think MooseFS can be a good choice for me. We're going to have a large number of users (100,000+).

Is there anybody here who has experience with MooseFS as a mail server's storage?
From: Robert S. <rsa...@ne...> - 2011-07-24 03:37:58
Some of this the developers will have to answer.

What we have seen is that there does seem to be some limitation in MooseFS on the maximum number of operations per second on a specific mount. Network latencies seem to have a large influence on overall IOPS. Given the right conditions we could max out the network connections with no problems. We did not have 10 Gb/s to play with, but multiple bonded 1 Gb/s network cards. Depending on the size of the average transfer you are doing, the bus seems to be easy to saturate. In our use case the IOPS of the drives and the network latency limitations seem to be saturated before the bus is saturated.

Will more cores be useful for MooseFS? My guess is that depends on what else you are doing on the same machine. If you only use it as a chunkserver then I think the bus and the disk IOPS limits will be reached way before you will be CPU constrained. If you also mount the filesystem and export it using something like Samba or Apache, then the nature of Samba and Apache may make you CPU constrained.

For example, a very non-scientific measurement is that I have seen around 1.75 million operations per hour on our system. This is around 486 per second. I need to stress that this is from a random sampling of looking at the CGI interface and may not be the maximum possible on our network. This is with the load distributed between 3 mounts from the same mfsmaster. I suspect we will be able to raise it a little by distributing the load between 4 mounts. But it does not scale linearly. I also have to stress that this is not IOPS. This is how MooseFS defines operations.

If I look at the CPU usage of mfsmount, mfschunkserver and mfsmaster, then it is negligible. RAM usage is significant, disk utilization is significant, and I/O wait time is significant.

What would be interesting is to see what would have the biggest effect on overall IOPS: 10 Gb/s Ethernet, or InfiniBand with its lower throughput but also much lower latencies. An answer I did get from the developers implied that upgrading to 10 Gb/s may not have much of an effect on the overall IOPS, although it probably will have an effect on total throughput.

Saturating a 6 Gbps SAS2 link can be done with less than 12 SAS drives: 88 MB/s x 12 = 1.056 GB/s, or around 8.4 Gb/s. That is if we assume that the average 10k RPM SAS drive has an average transfer rate of 88 MB/s. This is sort of a worst case scenario, but it proves my point: a single SAS link is relatively trivial to saturate given a modest number of spindles.

Robert

On 7/23/11 9:16 PM, Elliot Finley wrote:
> On Thu, Jul 21, 2011 at 8:12 PM, Robert Sandilands
> <rsa...@ne...> wrote:
>> The only time that many spindles will be remotely useful on 1 controller
>> is if you are doing a large amount of small random read/write accesses.
>> For anything else you are more likely to saturate the bus connecting the
>> machine to the controller/expander.
> I appreciate you taking the time to answer. What you're saying here
> is a common sense answer, but it doesn't really answer my specific
> question. Most likely because I didn't state it clear enough. So
> here is my question in a (hopefully) more clear fashion:
>
> Is MooseFS sufficiently multithreaded/multi-process to be able to make
> use of lots of spindles to handle lots of IOPS... To the point of
> saturating the 6Gbps SAS2 link? This obviously implies a 10G ethernet
> link to the chunkserver.
>
> Also asked another way: does having more cores == more performance?
> i.e. Can MooseFS take advantage of multiple cores? to what extent?
>
> I ask to what extent because a multithreaded/multi-process daemon can
> get hung up with lock contention.
>
> TIA
> Elliot
From: Elliot F. <efi...@gm...> - 2011-07-24 01:19:22
I forgot to add that my question implies lots of very small files.

Would I be able to scale a chunkserver by using sufficient cores/spindles to the point of saturating the 6Gbps SAS2 link with a workload of lots of very small files?

On Sat, Jul 23, 2011 at 7:16 PM, Elliot Finley <efi...@gm...> wrote:
> On Thu, Jul 21, 2011 at 8:12 PM, Robert Sandilands
> <rsa...@ne...> wrote:
>> The only time that many spindles will be remotely useful on 1 controller
>> is if you are doing a large amount of small random read/write accesses.
>> For anything else you are more likely to saturate the bus connecting the
>> machine to the controller/expander.
>
> I appreciate you taking the time to answer. What you're saying here
> is a common sense answer, but it doesn't really answer my specific
> question. Most likely because I didn't state it clear enough. So
> here is my question in a (hopefully) more clear fashion:
>
> Is MooseFS sufficiently multithreaded/multi-process to be able to make
> use of lots of spindles to handle lots of IOPS... To the point of
> saturating the 6Gbps SAS2 link? This obviously implies a 10G ethernet
> link to the chunkserver.
>
> Also asked another way: does having more cores == more performance?
> i.e. Can MooseFS take advantage of multiple cores? to what extent?
>
> I ask to what extent because a multithreaded/multi-process daemon can
> get hung up with lock contention.
>
> TIA
> Elliot
From: Elliot F. <efi...@gm...> - 2011-07-24 01:16:33
On Thu, Jul 21, 2011 at 8:12 PM, Robert Sandilands <rsa...@ne...> wrote:
> The only time that many spindles will be remotely useful on 1 controller
> is if you are doing a large amount of small random read/write accesses.
> For anything else you are more likely to saturate the bus connecting the
> machine to the controller/expander.

I appreciate you taking the time to answer. What you're saying here is a common sense answer, but it doesn't really answer my specific question. Most likely because I didn't state it clearly enough. So here is my question in a (hopefully) clearer fashion:

Is MooseFS sufficiently multithreaded/multi-process to be able to make use of lots of spindles to handle lots of IOPS... to the point of saturating the 6Gbps SAS2 link? This obviously implies a 10G ethernet link to the chunkserver.

Also asked another way: does having more cores == more performance? i.e. Can MooseFS take advantage of multiple cores? To what extent?

I ask to what extent because a multithreaded/multi-process daemon can get hung up with lock contention.

TIA
Elliot
From: Ricardo J. B. <ric...@da...> - 2011-07-22 17:56:31
On Friday 22 July 2011, Kef wrote:
> Hello !
>
> I have some difficulties to configure my chunk servers and a client.
> The two chunks server (and the master) are in the same network (a private
> network in 192.168.0.X). All this servers share the same public IP address
> and are accessible from the outside using NAT.
>
> I have a client which is outside the network. It can mount the filesystem a
> write to it, but it can't download the files (and after some delay, a "ls
> -la" displays a size of 0 for each file).
>
> Is it possible to use MFS with NAT, where my chunks server are in the same
> network as my master server (they are connected to the master server using
> the public adress IP, the 9420 port is redirected to the master server)
> with a client outside of the network ?

I don't think you can, at least not easily. From the docs [1]:

"Note: It's important which local IP address mfschunkserver uses to connect to mfsmaster. This address is passed by mfsmaster to MFS clients (mfsmount) and other chunkservers to communicate with the chunkserver, so it must be remotely accessible. Thus master address (MASTER_HOST) must be set to such for which chunkserver will use proper local address to connect - usually belonging to the same network as all MFS clients and other chunkservers. Generally loopback addres (localhost, 127.0.0.1) can't be used as MASTER_HOST, as it would make the chunkserver inaccessible for any other host (such configuration would work only on single machine running all of mfsmaster, mfschunkserver and mfsmount)."

[1] http://www.moosefs.org/reference-guide.html

Cheers.
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
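A minimal illustration of the constraint the docs describe: what matters is the local address the chunkserver happens to use when it connects out to MASTER_HOST, because that is the address the master later hands to mfsmount and to the other chunkservers. The values below are placeholders:

# /etc/mfschunkserver.cfg (excerpt, illustrative addresses)
MASTER_HOST = 192.168.0.10
# the source address of this connection (here something in 192.168.0.x) is what
# clients will be told to contact on the chunkserver's data port (9422 by default),
# so a client outside the NAT never reaches it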
From: Kef <ke...@gm...> - 2011-07-22 14:53:54
Hello!

I have some difficulties configuring my chunk servers and a client. The two chunk servers (and the master) are in the same network (a private network in 192.168.0.X). All these servers share the same public IP address and are accessible from the outside using NAT.

I have a client which is outside the network. It can mount the filesystem and write to it, but it can't download the files (and after some delay, an "ls -la" displays a size of 0 for each file).

Is it possible to use MFS with NAT, where my chunk servers are in the same network as my master server (they are connected to the master server using the public IP address; the 9420 port is redirected to the master server), with a client outside of the network?

Thanks for your help
From: Elliot F. <efi...@gm...> - 2011-07-22 02:37:16
How many cores can a chunkserver/masterserver make use of?

TIA
Elliot
From: Robert S. <rsa...@ne...> - 2011-07-22 02:12:42
The only time that many spindles will be remotely useful on 1 controller is if you are doing a large amount of small random read/write accesses. For anything else you are more likely to saturate the bus connecting the machine to the controller/expander.

Performance, as I see it, is divided into at least 2 types of performance:

1. Transfer speed
2. IOPS (I/O operations per second)

For completely serial jobs you can get high transfer speeds with very few spindles. The more clients, or the more random the load generated by your clients, the more spindles become useful. But there is a tradeoff. If you are reading large numbers of large files then transfer speed may be your limiting factor. If you read large numbers of small files then IOPS may be.

We have 68 spindles distributed between 2 chunkservers and we are maxed out on IOPS on the one chunkserver. On the other chunkserver we are around 60%. But our load pattern is random reads of millions of small files (< 600 kB). The reason for the imbalance is historical and will go away in the next few weeks.

When you start dealing with large numbers of disks on a single controller/device you will be surprised what bottlenecks exist. The RAID controller rapidly becomes your bottleneck when it comes to transfer speed. In my experience it seems to be better to have either multiple controllers, multiple servers or multiple channels if you are talking to so many spindles. In the case of multiple controllers and channels you may still end up saturating the PCI bus or your network interfaces.

At some stage a storage company tried to convince us to buy an enclosure with 12 x SSD drives. I asked them to do some performance tests and the numbers were quite interesting. The SSDs could quickly saturate the bus. In transfer speed, 12 x SSD was not any faster than 12 x SAS drives. The reason: the SAS bus got saturated in both cases. In IOPS the SSD drives were better until it got to a point where the bus was saturated, and then adding more SSD drives did not provide any advantage.

Robert

On 7/21/11 2:46 PM, Elliot Finley wrote:
> Does putting more spindles on a chunkserver increase performance? If
> so, to what point?
>
> I'm specifically thinking of a SAS2 controller/SAS2 expander with 127
> drives attached. Does the chunkserver daemon have the capability to
> efficiently use that many spindles?
>
> TIA
> Elliot
From: Elliot F. <efi...@gm...> - 2011-07-21 18:46:59
Does putting more spindles on a chunkserver increase performance? If so, to what point?

I'm specifically thinking of a SAS2 controller/SAS2 expander with 127 drives attached. Does the chunkserver daemon have the capability to efficiently use that many spindles?

TIA
Elliot
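For context on how a chunkserver addresses its spindles: each data directory, typically one per disk and each mounted as its own filesystem, gets its own line in mfshdd.cfg, and the daemon spreads chunks across all of them. A sketch with placeholder mount points (the config file path depends on how MooseFS was built):

# /etc/mfshdd.cfg -- one line per disk used for chunk storage
/mnt/chunks01
/mnt/chunks02
/mnt/chunks03
# ...one entry per spindle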
From: WK <wk...@bn...> - 2011-07-21 17:35:11
On 7/21/2011 10:30 AM, WK wrote:
> OK, despite having previously set our CHUNKS_DEL_LIMIT to 20 a few
> days ago and restarting mfsmaster we are once again seeing 10K+ chunk
> deletions a second and it is impacting the cluster (though not as
> badly this time as we have more memory in our chunkservers). How do we
> limit this? CHUNKS_DEL_LIMIT seems to work under normal loads, but the
> system seems to ignore it at times.

Further information: the individual logs for the chunkservers show chunk deletion instances as high as 30K per minute at times.

-bill
From: WK <wk...@bn...> - 2011-07-21 17:31:23
On 6/27/2011 8:52 AM, Ólafur Ósvaldsson wrote:
> Hi,
> Our system had just over 4.5 million chunks untill one week ago when a good deal
> of the data was deleted (on purpose).
>
> Today the trash time expired and it seems that our master is deleting all the
> unused chunks which is quite normal except each server of 10 is doing up to 5.5k
> chunk deletions pr. minute and all systems accessing the MFS partitions are very
> slow and that started at the same time as the chunk deletions.

OK, despite having previously set our CHUNKS_DEL_LIMIT to 20 a few days ago and restarting mfsmaster, we are once again seeing 10K+ chunk deletions a second and it is impacting the cluster (though not as badly this time as we have more memory in our chunkservers). How do we limit this? CHUNKS_DEL_LIMIT seems to work under normal loads, but the system seems to ignore it at times.

-bill
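For reference, the throttle being discussed lives in mfsmaster.cfg alongside the other chunk-loop limits. The values below are only the one mentioned in this thread plus stock defaults as I recall them, so check them against the mfsmaster.cfg man page for your version:

# /etc/mfsmaster.cfg (excerpt, illustrative values)
CHUNKS_LOOP_TIME = 300        # how often the master walks all chunks, in seconds
CHUNKS_DEL_LIMIT = 20         # deletion limit per loop, as set in this thread
CHUNKS_WRITE_REP_LIMIT = 1    # replication throttles, shown for completeness
CHUNKS_READ_REP_LIMIT = 5

# apply without a full restart
mfsmaster reload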
From: Anh K. H. <ky...@vi...> - 2011-07-20 12:33:35
On Wed, 20 Jul 2011 07:06:34 -0400, Vineet Jain <vin...@gm...> wrote:

> This works great if you have the extra disks. However, what if you
> want to:
>
> - shutdown foobar
> - move the physical disks to another server
> - startup foobar on another server.
>
> Is that possible?

The answer depends on your MFS setup: number of chunk servers, goals of all files, number of disks and the size of disks, ... Could you please give more details about your MFS configuration?

*For example*, if all files have at least goal 2 (this means you have at least two chunk servers) and all files are good (check in MFS.cgi), you may shut down the chunk server "foobar" as you've said. You should move the disks and server as soon as possible. Please note this isn't the safest method to move a chunk server. Actually I used this way for one of my MFS setups, but I don't recommend it :D

Please note, when moving a chunk server, all MFS configuration and variant files (/var/lib/mfs/) should be moved to the new server.

Hope this helps.

Regards,

> On Wed, Jul 20, 2011 at 3:25 AM, Anh K. Huynh <ky...@vi...> wrote:
> > On Tue, 19 Jul 2011 14:02:04 -0400
> > Vineet Jain <vin...@gm...> wrote:
> >
> >> I currently have the mds server and chunk server running on one
> >> server. At some point I will have to move the chunk server to
> >> another machine. How would I go about doing that? Can I shut down
> >> everything. And just move the hard drives to another machine and
> >> make sure each hard drive gets mounted to the exact same point
> >> and restart the services?
> >
> > From my practice:
> >
> > (1) Set up new chunk server and make it available on your MFS setup
> > (2) On the chunk server (Foobar) that want to destroy, mark
> >     your disks as "removal" and reload/restart process
> > (3) Wait for the complete status of replication
> > (4) Shutdown the chunk server (Foobar)
> >
> > Feel free to read details at
> > http://www.moosefs.org/moosefs-faq.html#add_remove
> > http://www.moosefs.org/moosefs-faq.html#mark_for_removal
> > ...

--
Anh Ky Huynh @ ICT
Registered Linux User #392115
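A sketch of the "shut it down and move the disks" variant under discussion, assuming the usual default locations (config under /etc, state under /var/lib/mfs; adjust to your build) and that all files are already at goal >= 2 before you start:

# on the old chunkserver
mfschunkserver stop                       # stop cleanly so chunks are flushed
tar czf mfs-etc.tgz /etc/mfschunkserver.cfg /etc/mfshdd.cfg
tar czf mfs-var.tgz /var/lib/mfs

# on the new machine, after physically moving the disks:
#  1. mount every data disk at the same path it had before, matching mfshdd.cfg
#  2. restore the two tarballs to the same locations
mfschunkserver start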
From: Anh K. H. <ky...@vi...> - 2011-07-20 07:26:11
On Tue, 19 Jul 2011 14:02:04 -0400, Vineet Jain <vin...@gm...> wrote:

> I currently have the mds server and chunk server running on one
> server. At some point I will have to move the chunk server to
> another machine. How would I go about doing that? Can I shut down
> everything. And just move the hard drives to another machine and
> make sure each hard drive gets mounted to the exact same point and
> restart the services?

From my practice:

(1) Set up a new chunk server and make it available on your MFS setup
(2) On the chunk server (Foobar) that you want to destroy, mark your disks as "removal" and reload/restart the process
(3) Wait for the replication to complete
(4) Shut down the chunk server (Foobar)

Feel free to read details at
http://www.moosefs.org/moosefs-faq.html#add_remove
http://www.moosefs.org/moosefs-faq.html#mark_for_removal

Hope this helps.

Regards,
--
Anh Ky Huynh @ ICT
Registered Linux User #392115
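The "mark as removal" step in (2) is done in the chunkserver's mfshdd.cfg by prefixing each disk entry with an asterisk and then reloading the daemon; the paths here are placeholders:

# /etc/mfshdd.cfg on the chunkserver being drained
*/mnt/chunks01
*/mnt/chunks02

# pick up the change
mfschunkserver reload

# then watch the CGI interface until no chunks are left whose only valid copy
# is on this server before shutting it down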
From: WK <wk...@bn...> - 2011-07-19 18:32:18
On 7/19/11 10:58 AM, Vineet Jain wrote:
>> I thought the OP was asking: If I have one physical server with 6
>> drives in it, would it be better to run a single chunkserver process
>> and give it all six drives or run 6 chunkserver processes giving them
>> 1 drive each.
> That's exactly what I was asking.

Ugh. Sorry, that idea didn't even occur to me, so I didn't even read it that way.

That being said, running 6 chunkserver processes on the same machine sounds like a recipe for a trainwreck if you slip up. MFS has some 'interesting' behaviour when it comes to rebalancing etc, so I'm not sure how it would treat those.

-bill
From: Vineet J. <vin...@gm...> - 2011-07-19 18:02:11
I currently have the mds server and chunk server running on one server. At some point I will have to move the chunk server to another machine. How would I go about doing that? Can I shut down everything, just move the hard drives to another machine, make sure each hard drive gets mounted to the exact same point, and restart the services?
From: Vineet J. <vin...@gm...> - 2011-07-19 17:58:35
> I thought the OP was asking: If I have one physical server with 6
> drives in it, would it be better to run a single chunkserver process
> and give it all six drives or run 6 chunkserver processes giving them
> 1 drive each.

That's exactly what I was asking.