From: Steve T. <sm...@cb...> - 2012-03-28 19:33:45
|
MooseFS 1.6.20, Linux (Centos 5.7). I have discovered that neither firefox nor thunderbird (V10) will run correctly when launched from a MooseFS-based home directory; they just hang indefinitely. Does anyone else have this experience, or have I screwed up somewhere? Thanks, Steve -- ---------------------------------------------------------------------------- Steve Thompson, Cornell School of Chemical and Biomolecular Engineering smt AT cbe DOT cornell DOT edu "186,282 miles per second: it's not just a good idea, it's the law" ---------------------------------------------------------------------------- |
From: Steve W. <st...@pu...> - 2012-03-28 19:47:04
|
On 03/28/2012 03:33 PM, Steve Thompson wrote: > MooseFS 1.6.20, Linux (Centos 5.7). > > I have discovered that neither firefox nor thunderbird (V10) will run > correctly when launched from a MooseFS-based home directory; they just > hang indefinitely. Does anyone else have this experience, or have I > screwed up somewhere? Thanks, > > Steve We're using MooseFS 1.6.20 for home directories in an Ubuntu environment. We do get the occasional "fs_writechunk returns status 11" error noted earlier by Brent Nelson on this list but no problems like what you're seeing. I know we have users running Firefox but I'm not sure if anyone is using Thunderbird. We've been extremely pleased with MooseFS in the 12 months or so that we've been using it. Steve W. |
From: Brent A N. <br...@ph...> - 2012-03-28 20:00:13
|
I, too, have encountered no issues with firefox under Ubuntu (8.04 or 10.04). I haven't tried Thunderbird. OpenOffice has an issue with the same effect as what you describe, but it's a known OpenOffice bug, and it's fixed in LibreOffice. Incidentally, for the "fs_writechunk returns status 11" messages, I've told google-chrome to disable its cache (-disk-cache-dir="/dev/null"), which I suspect will do the trick. Chrome also now seems much faster... On Wed, 28 Mar 2012, Steve Wilson wrote: > On 03/28/2012 03:33 PM, Steve Thompson wrote: >> MooseFS 1.6.20, Linux (Centos 5.7). >> >> I have discovered that neither firefox nor thunderbird (V10) will run >> correctly when launched from a MooseFS-based home directory; they just >> hang indefinitely. Does anyone else have this experience, or have I >> screwed up somewhere? Thanks, >> >> Steve > > We're using MooseFS 1.6.20 for home directories in an Ubuntu > environment. We do get the occasional "fs_writechunk returns status 11" > error noted earlier by Brent Nelson on this list but no problems like > what you're seeing. I know we have users running Firefox but I'm not > sure if anyone is using Thunderbird. We've been extremely pleased with > MooseFS in the 12 months or so that we've been using it. > > Steve W. > > ------------------------------------------------------------------------------ > This SF email is sponsosred by: > Try Windows Azure free for 90 days Click Here > http://p.sf.net/sfu/sfd2d-msazure > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
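A rough sketch of the cache workaround Brent describes: keep Chrome's disk cache off the network home directory. The double-dash spelling (--disk-cache-dir) is the one used in Chromium documentation, and the local target path here is only an example:

    # keep Chrome's cache on local disk instead of in the MooseFS home
    google-chrome --disk-cache-dir=/tmp/${USER}-chrome-cache
    # or discard it entirely, as suggested above
    google-chrome --disk-cache-dir=/dev/null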
From: Steve W. <st...@pu...> - 2012-03-28 20:04:38
|
On 03/28/2012 04:00 PM, Brent A Nelson wrote: > I, too, have encountered no issues with firefox under Ubuntu (8.04 or > 10.04). I haven't tried Thunderbird. OpenOffice has an issue with > the same effect as what you describe, but it's a known OpenOffice bug, > and it's fixed in LibreOffice. > > Incidentally, for the "fs_writechunk returns status 11" messages, I've > told google-chrome to disable its cache (-disk-cache-dir="/dev/null"), > which I suspect will do the trick. Chrome also now seems much faster... > We see the status 11 errors whenever multiple workstations attempt to simultaneously write to the same file in a MooseFS volume. Steve > On Wed, 28 Mar 2012, Steve Wilson wrote: > >> On 03/28/2012 03:33 PM, Steve Thompson wrote: >>> MooseFS 1.6.20, Linux (Centos 5.7). >>> >>> I have discovered that neither firefox nor thunderbird (V10) will run >>> correctly when launched from a MooseFS-based home directory; they just >>> hang indefinitely. Does anyone else have this experience, or have I >>> screwed up somewhere? Thanks, >>> >>> Steve >> >> We're using MooseFS 1.6.20 for home directories in an Ubuntu >> environment. We do get the occasional "fs_writechunk returns status 11" >> error noted earlier by Brent Nelson on this list but no problems like >> what you're seeing. I know we have users running Firefox but I'm not >> sure if anyone is using Thunderbird. We've been extremely pleased with >> MooseFS in the 12 months or so that we've been using it. >> >> Steve W. >> >> ------------------------------------------------------------------------------ >> >> This SF email is sponsosred by: >> Try Windows Azure free for 90 days Click Here >> http://p.sf.net/sfu/sfd2d-msazure >> _______________________________________________ >> moosefs-users mailing list >> moo...@li... >> https://lists.sourceforge.net/lists/listinfo/moosefs-users >> -- Steven M. Wilson, Systems and Network Manager Markey Center for Structural Biology Purdue University (765) 496-1946 |
From: Dr. M. J. C. <mj...@av...> - 2012-03-28 20:03:32
|
On 03/28/2012 03:33 PM, Steve Thompson wrote: > MooseFS 1.6.20, Linux (Centos 5.7). > > I have discovered that neither firefox nor thunderbird (V10) will run > correctly when launched from a MooseFS-based home directory; they just > hang indefinitely. Does anyone else have this experience, or have I > screwed up somewhere? Thanks, > > Steve I'm running 1.6.20 on Fedora 16 servers, with several Fedora 16 clients (who have their home folders on the moosefs mount). Firefox and Thunderbird work just fine. No issues at all. LibreOffice was painfully slow, until I converted the chunkserver hard drives to SSDs. I think LibreOffice was fsyncing way too frequently (just a theory). Speedy Intel SSDs made that problem go away. - Mike |
From: Steve T. <sm...@cb...> - 2012-03-29 15:45:46
|
On Wed, 28 Mar 2012, Dr. Michael J. Chudobiak wrote: > I'm running 1.6.20 on Fedora 16 servers, with several Fedora 16 clients > (who have their home folders on the moosefs mount). > > Firefox and Thunderbird work just fine. No issues at all. I have further found that Thunderbird works well, but Firefox is so painfully slow (glacial) as to be unusable. For the time being, I have had to relocate the .mozilla directories to a non-MFS file system and replace them with symbolic links. Steve |
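A minimal sketch of the symlink workaround Steve describes; /local/scratch is a hypothetical non-MFS path and should be adjusted to whatever local storage is actually available:

    # move the Firefox/Thunderbird profile data off MFS and leave a symlink behind
    mkdir -p /local/scratch/$USER
    mv ~/.mozilla /local/scratch/$USER/mozilla
    ln -s /local/scratch/$USER/mozilla ~/.mozilla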
From: Brent A N. <br...@ph...> - 2012-03-29 15:51:17
|
Have you tried just disabling the disk cache? It might eliminate the slowness. Web caches tend to consist of very large numbers of mostly small files; that's not the most efficient situation, especially for typical network filesystems... On Thu, 29 Mar 2012, Steve Thompson wrote: > On Wed, 28 Mar 2012, Dr. Michael J. Chudobiak wrote: > >> I'm running 1.6.20 on Fedora 16 servers, with several Fedora 16 clients >> (who have their home folders on the moosefs mount). >> >> Firefox and Thunderbird work just fine. No issues at all. > > I have further found that Thunderbird works well, but Firefox is so > painfully slow (glacial) as to be unusable. For the time being, I have had > to relocate the .mozilla directories to a non-MFS file system and replace > them by symbolic links. > > Steve > > ------------------------------------------------------------------------------ > This SF email is sponsosred by: > Try Windows Azure free for 90 days Click Here > http://p.sf.net/sfu/sfd2d-msazure > _______________________________________________ > moosefs-users mailing list > moo...@li... > https://lists.sourceforge.net/lists/listinfo/moosefs-users > |
From: Steve T. <sm...@cb...> - 2012-03-29 16:25:26
|
On Thu, 29 Mar 2012, Brent A Nelson wrote: > Have you tried just disabling the disk cache? It might eliminate the > slowness. Web caches tend to consist of very large numbers of mostly small > files; that's not the most efficient situation, especially for typical > network filesystems... Yes, turning off any caching does not really help very much. It appears to be the bookmarks cache (places.sqlite) that is a major source of the bottleneck. At least for now I have the users off my back :) Steve |
From: Dr. M. J. C. <mj...@av...> - 2012-03-30 11:53:44
|
On 03/30/2012 06:49 AM, Michał Borychowski wrote: > Hi Michael! > > Do you use only SSD drives in the chunkservers? Maybe you would like to > share some speed tests with the users on the group? > > And do you have SSD disk in the master server? How big is your metadata? > Have you noticed any improvements? Michał, I have a small moosefs system holding ~400 GB of data, including users' home folders. The master server always had SSD disks. metadata.mfs is only ~50 MB. Performance was quite disappointing until I removed the two 1TB hard drives in the chunk servers and replaced them with four 600 GB SSDs. The improvement in performance was HUGE. For a small system, they are definitely worth the cost. Here is a quick test I did in a live system, comparing a 600 GB SSD in one chunkserver with a 1TB hard drive in the other: http://i.imgur.com/J0wxz.png Both chunkservers were on similar network connections (gigabit ethernet, same switch, jumbo frames). I think LibreOffice was having trouble with the very long fsync times reported on the hard drives, particularly when accessing ~/.libreoffice. I don't know why the fsync times were so dreadful, but the SSDs made that issue go away entirely. Perhaps I could have tweaked something with hdparm, but it was more practical to just swap in the SSDs. The current system is excellent, robust, stable, and even works well with all those applications that use sqlite files, like Firefox and Thunderbird (troublesome on all versions of nfs). For a future feature enhancement, you might consider allowing the admin to specify that certain folders - like /fileserver/home - be assigned to chunks on certain disks, so that home folders could go on SSDs, while bulk data goes on slower disks. - Mike |
From: Ricardo J. B. <ric...@da...> - 2012-03-30 16:10:44
|
On Friday 30/03/2012, Dr. Michael J. Chudobiak wrote: > On 03/30/2012 06:49 AM, Michał Borychowski wrote: > > Hi Michael! > > > > Do you use only SSD drives in the chunkservers? Maybe you would like to > > share some speed tests with the users on the group? > > > > And do you have SSD disk in the master server? How big is your metadata? > > Have you noticed any improvements? > > Michał, > > I have a small moosefs system holding ~400 GB of data, including users' > home folders. > > The master server always had SSD disks. metadata.mfs is only ~50 MB. > > Performance was quite disappointing until I removed the two 1TB hard > drives in the chunk servers and replaced them with four 600 GB SSDs. The > improvement in performance was HUGE. For a small system, they are > definitely worth the cost. > > Here is a quick test I did in a live system, comparing a 600 GB SSD in > one chunkserver with a 1TB hard drive in the other: > > http://i.imgur.com/J0wxz.png Wow, those are really awful numbers for the 1 TB drives. Do those drives happen to have 4 KB physical block size? That combined with unaligned partitions could explain such bad performance. Check for example: http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX Regards, -- Ricardo J. Barberis Senior SysAdmin / ITI Dattatec.com :: Soluciones de Web Hosting Tu Hosting hecho Simple! ------------------------------------------ |
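A quick way to check for the 4 KB sector / alignment issue Ricardo raises, assuming a reasonably recent kernel and parted; /dev/sdb and partition 1 are examples:

    cat /sys/block/sdb/queue/logical_block_size    # typically 512
    cat /sys/block/sdb/queue/physical_block_size   # 4096 on Advanced Format drives
    parted /dev/sdb align-check optimal 1          # reports whether partition 1 is aligned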
From: Dr. M. J. C. <mj...@av...> - 2012-03-30 16:34:26
|
On 03/30/2012 12:10 PM, Ricardo J. Barberis wrote: > Wow, those are really awful numbers for the 1 TB drives. > > Do those drives happen to have 4 KB physical block size? > That combined with unaligned partitions could explain such bad performance. I'm not sure. I had wanted to move to SSDs anyway, so this was the push I needed. I didn't explore optimizing the 1TB disks in detail. - Mike |
From: Steve T. <sm...@cb...> - 2012-03-30 17:41:53
|
On Fri, 30 Mar 2012, Ricardo J. Barberis wrote: > Do those drives happen to have 4 KB physical block size? > That combined with unaligned partitions could explain such bad performance. I always use whole disks combined into RAID sets via hardware raid, so partition alignment is not an issue, and then use ext4 file systems as chunk volumes. Raw I/O performance of the volumes is excellent, and indeed the I/O performance numbers shown by the MFS cgi script are also excellent (both read and write are greater than gigabit bandwidth). I use high-end Dell servers with 3+ GHz processors and dual bonded gigabit links for I/O, but nevertheless the resulting MFS I/O performance, which was good initially when I built a testing setup, has just plummeted with a real-world I/O load on it, to the point where I am getting a lot of complaints. This looks like a show stopper to me, which is somewhat upsetting. I'm not sure what I can do at this point to tune it further. Steve -- ---------------------------------------------------------------------------- Steve Thompson, Cornell School of Chemical and Biomolecular Engineering smt AT cbe DOT cornell DOT edu "186,282 miles per second: it's not just a good idea, it's the law" ---------------------------------------------------------------------------- |
From: Chris P. <ch...@ec...> - 2012-03-30 18:25:30
|
On 2012/03/30 7:41 PM, Steve Thompson wrote: > On Fri, 30 Mar 2012, Ricardo J. Barberis wrote: > >> Do those drives happen to have 4 KB physical block size? >> That combined with unaligned partitions could explain such bad performance. > I always use whole disks combined into RAID sets via hardware raid, so > partition alignment is not an issue, and then use ext4 file systems as > chunk volumes. Raw I/O performance of the volumes is excellent, and > indeed the I/O performance numbers shown by the MFS cgi script are also > excellent (both read and write are greater than gigabit bandwidth). I use > high-end Dell servers with 3+ GHz processors and dual bonded gigabit links > for I/O, but nevertheless the resulting MFS I/O performance, which was > good initially when I built a testing setup, has just plummeted with a > real-world I/O load on it, to the point where I am getting a lot of > complaints. This looks like a show stopper to me, which is somewhat > upsetting. I'm not sure what I can do at this point to tune it further. Hi Steve Do those servers have battery-backed cache on the raid? If so, are they set to write-back or write-through? I have found that when running on standard SATA disks, the constant fsync is what slows things down tremendously. A battery-backed write-back cache on the raid card would help a lot there. Chris |
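A crude way to see the synchronous-write latency Chris is talking about, independent of MooseFS; the chunk mount point is an example. With a battery-backed write-back cache the reported rate should be dramatically higher than with write-through or a bare SATA disk:

    # 1000 x 4 KiB writes, each followed by a sync
    dd if=/dev/zero of=/mnt/chunk1/synctest bs=4k count=1000 oflag=dsync
    rm /mnt/chunk1/synctest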
From: Steve T. <sm...@cb...> - 2012-03-30 18:33:00
|
On Fri, 30 Mar 2012, Chris Picton wrote: > Do those servers have battery-backed cache on the raid? If so, are they set > to write-back or write-through? > > I have found that when running on standard SATA disks, the constant fsync is > what slows things down tremendously. A battery-backed write-back cache on > the raid card would help a lot there. Yes, the controllers (Perc 5's and Perc 6's) have battery backup, and the virtual disks are set to write back. Indeed, write through (such as when the battery is doing a learn cycle) is a great deal slower. Steve |
From: Quenten G. <QG...@on...> - 2012-03-31 01:46:12
|
Hi Steve I'm also in the middle of building a similar configuration. Did you happen to consider using maybe FreeBSD with ZFS and a couple of smallish SSDs for logs? After thinking about it this morning I imagine this could considerably reduce the fsync speed issues. Also as a side note I went through my small test cluster, which is 6 machines with 2 disks each (1ru servers), and replaced all of the disks which seemed to have higher-than-average fsync times compared to the other disks, and this increased my cluster's performance considerably; I'm not currently running any raid. I guess this may go without saying, but I thought I'd mention it :) Quenten Grasso -----Original Message----- From: Steve Thompson [mailto:sm...@cb...] Sent: Saturday, 31 March 2012 4:33 AM To: Chris Picton Cc: moo...@li... Subject: Re: [Moosefs-users] SSDs On Fri, 30 Mar 2012, Chris Picton wrote: > Do those servers have battery-backed cache on the raid? If so, are they set > to write-back or write-through? > > I have found that when running on standard SATA disks, the constant fsync is > what slows things down tremendously. A battery-backed write-back cache on > the raid card would help a lot there. Yes, the controllers (Perc 5's and Perc 6's) have battery backup, and the virtual disks are set to write back. Indeed, write through (such as when the battery is doing a learn cycle) is a great deal slower. Steve ------------------------------------------------------------------------------ This SF email is sponsored by: Try Windows Azure free for 90 days Click Here http://p.sf.net/sfu/sfd2d-msazure _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
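For reference, a hedged FreeBSD/ZFS sketch of the layout Quenten suggests, with a small SSD as a separate intent-log device to absorb the fsync traffic; the pool name and device names are invented:

    # three data disks plus one small SSD for the ZIL
    zpool create chunkpool raidz da1 da2 da3
    zpool add chunkpool log da4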
From: Steve W. <st...@pu...> - 2012-04-11 13:55:09
|
On 03/30/2012 09:45 PM, Quenten Grasso wrote: > Also as a side note I went though my small test cluster which is 6 machines with 2 disks each (1ru servers) and replaced all of the disks which seemed to have a higher then average fsync than the other disks and this increased my clusters performance considerably and I'm not currently running any raid. I guess this may go without saying however thought I might mention it :) Is this a common problem... having disks (all the same model from the same manufacturer) which exhibit very different fsync and read/write performance? I've had to replace a few disks (they're Hitachi 3TB HDS723030ALA640 drives) and the ones I've replaced have fairly dismal performance compared to the other disks in the server. For example, the daily average read transfer speeds range from 58MB/s to 72MB/s but on the replacement drive it is 29MB/s. Similarly, write speeds range from 3.4MB/s to 3.8MB/s (but one outlier is giving 2.8MB/s) and the replacement drive is showing 2.3MB/s. Average fsync times range between 50,675us and 58,882us while the replacement drive shows 76,516us. Since the numbers were so far off from the average, I replaced one drive two more times with the same results. Could it just be that different batches of the same disk drive have such widely varying characteristics? Or is there something in how MooseFS rebalances that could be causing this? When I replace a drive, I don't mark it for removal first; I just leave myself undergoal for a while until all necessary chunks are replicated to the replaced drive. Thanks, Steve > > Quenten Grasso |
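One thing worth checking on drives of the same model that show very different fsync times is whether the on-disk write cache is enabled; a hedged example, with /dev/sdc standing in for the replacement drive:

    hdparm -W /dev/sdc      # report the current write-caching setting
    hdparm -W1 /dev/sdc     # enable the drive's write cache (weigh the power-loss risk first)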
From: Steve T. <sm...@cb...> - 2012-05-10 21:28:57
|
On Thu, 29 Mar 2012, Steve Thompson wrote: > I have further found that Thunderbird works well, but Firefox is so > painfully slow (glacial) as to be unusable. For the time being, I have had > to relocate the .mozilla directories to a non-MFS file system and replaced > them by symbolic links. Now I have upgraded to 1.6.25 and have emptied MFS completely apart from my .mozilla directory. There are now four dedicated chunkservers with a total of 20TB of SATA RAID-5 file systems formatted with ext4, and all four are connected to a common HP Procurve switch using dual bonded balance-alb gigabit links, dedicated to MFS, with MTU=1500. The master is running on one of the chunkservers. "hdparm -t" gives me about 400 MB/sec. Firefox is still so painfully slow as to be unusable. It takes something like 30-45 minutes to start firefox, and several minutes to click on a link. With .mozilla in an NFS-mounted file system from the same disks, firefox starts immediately, so it doesn't look like hardware. Copying a large file into MFS gets me something like 80-85 MB/sec (physically twice that with goal=2), so I am at a loss to explain the dismal performance with firefox. I could really use some ideas, as I have no idea where to go next. Steve |
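The large-file copy mostly measures streaming bandwidth, which says little about the many small synchronous writes sqlite performs; a rough comparison on the MFS mount (paths are examples) would be something like:

    dd if=/dev/zero of=/mfs/test/big bs=1M count=1024                 # streaming write
    dd if=/dev/zero of=/mfs/test/small bs=4k count=1000 oflag=dsync   # small synchronous writes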
From: Dr. M. J. C. <mj...@av...> - 2012-05-11 10:01:22
|
On 05/10/2012 05:28 PM, Steve Thompson wrote: > On Thu, 29 Mar 2012, Steve Thompson wrote: > >> I have further found that Thunderbird works well, but Firefox is so >> painfully slow (glacial) as to be unusable. For the time being, I have had >> to relocate the .mozilla directories to a non-MFS file system and replaced >> them by symbolic links. ... > Copying a large file into MFS gets me something like 80-85 MB/sec > (physically twice that with goal=2) so I am at a loss to explain the > dismal performance with firefox. I could really use some ideas, as I have > no idea where to go next. I would focus on the sqlite files that firefox uses. sqlite is notorious for causing problems on remote filesystems (particularly NFS). "urlclassifier3.sqlite" in particular grows to be very large (~64 MB). Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps call fsync much more frequently than others. - Mike |
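A way to see how often Firefox actually syncs, along the lines Mike suggests; strace's counting mode prints a summary of the selected calls when the browser exits:

    # count fsync/fdatasync calls made by firefox and its children
    strace -f -c -e trace=fsync,fdatasync firefox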
From: Steve W. <st...@pu...> - 2012-05-11 15:58:14
|
On 05/11/2012 05:36 AM, Dr. Michael J. Chudobiak wrote: > On 05/10/2012 05:28 PM, Steve Thompson wrote: >> On Thu, 29 Mar 2012, Steve Thompson wrote: >> >>> I have further found that Thunderbird works well, but Firefox is so >>> painfully slow (glacial) as to be unusable. For the time being, I have had >>> to relocate the .mozilla directories to a non-MFS file system and replaced >>> them by symbolic links. > ... >> Copying a large file into MFS gets me something like 80-85 MB/sec >> (physically twice that with goal=2) so I am at a loss to explain the >> dismal performance with firefox. I could really use some ideas, as I have >> no idea where to go next. > I would focus on the sqlite files that firefox uses. sqlite is notorious > for causing problems on remote filesystems (particularly NFS). > "urlclassifier3.sqlite" in particular grows to be very large (~64 MB). > > Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps > call fsync much more frequently than others. > > - Mike I've been chasing similar problems the past day or two and have found the sqlite files created by Firefox to be a real problem (e.g., cookies.sqlite-wal, cookies.sqlite-shm, urlclassifier3.sqlite, places.sqlite-wal). Also, I've had occasional problems with a user's ~/.xsession-errors causing a lot of I/O activity. Notice the number of messages sent (and dropped) to syslog: May 11 10:00:37 maverick mfsmount[4749]: file: 75074, index: 3 - fs_writechunk returns status 11 May 11 10:00:38 mfsmount[4749]: last message repeated 199 times May 11 10:00:38 maverick rsyslogd-2177: imuxsock begins to drop messages from pid 4749 due to rate-limiting May 11 10:00:39 maverick mfschunkserver[2437]: testing chunk: /mfs/01/C2/chunk_00000000000F4DC2_00000001.mfs May 11 10:00:44 maverick rsyslogd-2177: imuxsock lost 17465 messages from pid 4749 due to rate-limiting Additionally, gvfsd-metadata has problems using shared network storage (I think it creates a memory-mapped file). I finally removed the execute bit from the gvfsd-metadata permissions to prevent it from running. This solved a problem I had with very high chunk deletion dramatically slowing down the MFS storage system. In each of these cases, it's not the fault of MooseFS but applications that don't properly handle network storage systems. Steve |
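A sketch of the gvfsd-metadata workaround Steve describes; the binary's location varies by distribution (for example /usr/libexec/gvfsd-metadata on Fedora, /usr/lib/gvfs/gvfsd-metadata on Ubuntu of that era), so check the path first:

    # stop gvfsd-metadata from being spawned for users with MFS home directories
    chmod a-x /usr/libexec/gvfsd-metadata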
From: Dr. M. J. C. <mj...@av...> - 2012-05-11 17:06:12
|
> Additionally, gvfsd-metadata has problems using shared network storage > (I think it creates a memory mapped file). I finally removed the execute > bit from the gvfsd-metadata permissions to prevent from running. This > solved a problem I had with very high chunk deletion dramatically > slowing down the MFS storage system. Is there a bug report against gvfs for that? Maybe related to this: http://bugzilla.redhat.com/show_bug.cgi?id=561904 - Mike |
From: Steve W. <st...@pu...> - 2012-05-11 17:11:26
|
On 05/11/2012 01:06 PM, Dr. Michael J. Chudobiak wrote: >> Additionally, gvfsd-metadata has problems using shared network storage >> (I think it creates a memory mapped file). I finally removed the execute >> bit from the gvfsd-metadata permissions to prevent from running. This >> solved a problem I had with very high chunk deletion dramatically >> slowing down the MFS storage system. > Is there a bug report against gvfs for that? > > Maybe related to this: > http://bugzilla.redhat.com/show_bug.cgi?id=561904 > > - Mike > Yes, that bug report was one that helped me pinpoint my problem and implement a work-around. Steve |
From: Steve T. <sm...@cb...> - 2012-05-15 22:35:06
|
On Fri, 11 May 2012, Dr. Michael J. Chudobiak wrote: > I would focus on the sqlite files that firefox uses. sqlite is notorious for > causing problems on remote filesystems (particularly NFS). > "urlclassifier3.sqlite" in particular grows to be very large (~64 MB). Indeed it is the sqlite files that cause the problems. The places.sqlite file isn't even recognized; I get no bookmarks or browsing history at all when running from MFS. There are also hundreds of cookies.sqlite-journal files that show up in the metadata trash folder. I have no trouble with NFS, though. I don't seem to be able to make any progress with this, unfortunately. BTW, I can run Bonnie++ in the MFS file system and get a sequential write performance of 95 MB/sec. This is similar to the write performance I can get with NFS. > Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps call > fsync much more frequently than others. I rebuilt, for testing purposes, an mfschunkserver with the fsync() calls disabled, and verified this via CGI (fsync times of 1 microsecond). It makes a small amount of difference, but firefox is still unusable. It seems that my goal of having home directories in MFS is not going to be workable. And I can see that Lustre and glusterfs users are having the same problems. Steve |
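Some of the sqlite churn can also be reduced from the Firefox side; a hedged user.js excerpt, with pref names as they existed around Firefox 10-12 (treat them as assumptions and test before rolling out to users):

    // relax sqlite's synchronous writes (0 = off, 1 = normal, 2 = full)
    user_pref("toolkit.storage.synchronous", 0);
    // stop the safe-browsing updates that grow urlclassifier3.sqlite
    user_pref("browser.safebrowsing.enabled", false);
    user_pref("browser.safebrowsing.malware.enabled", false);
    // keep the web cache out of the network home entirely
    user_pref("browser.cache.disk.enable", false);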
From: Dr. M. J. C. <mj...@av...> - 2012-05-15 23:19:45
|
> makes a small amount of difference, but firefox is still unusable. It > seems that my goal of having home directories in MFS is not going to be > workable. And I can see that Luster and glusterfs users are having the > same problems. Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock setup, except I've added "ignoregid" to /etc/mfsexports.cfg: * / rw,alldirs,ignoregid,maproot=0 and boosted: CHUNKS_WRITE_REP_LIMIT = 5 CHUNKS_READ_REP_LIMIT = 15 I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has been the smoothest system so far. I realize that doesn't help, but it's another data point... - Mike |
From: Steve T. <sm...@cb...> - 2012-05-15 23:56:51
|
On Tue, 15 May 2012, Dr. Michael J. Chudobiak wrote: >> makes a small amount of difference, but firefox is still unusable. It >> seems that my goal of having home directories in MFS is not going to be >> workable. And I can see that Luster and glusterfs users are having the >> same problems. > > Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock > setup, except I've added "ignoregid" to /etc/mfsexports.cfg: > > * / rw,alldirs,ignoregid,maproot=0 > > and boosted: > > CHUNKS_WRITE_REP_LIMIT = 5 > CHUNKS_READ_REP_LIMIT = 15 > > I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has > been the smoothest system so far. I realize that doesn't help, but it's > another data point... Mike, That is useful to know, especially with regard to glusterfs. I have also made the same two changes, along with using all high-end-ish hardware and a dedicated chunkserver network with dual bonded links. I realize that several people have this working OK, but I have no idea why it doesn't work for me :-( Thanks, Steve |
From: Quenten G. <QG...@on...> - 2012-05-16 00:04:00
|
Out of interest have you tried setting the goal of the fox/bird cache folder to 1? Quenten, -----Original Message----- From: Dr. Michael J. Chudobiak [mailto:mj...@av...] Sent: Wednesday, 16 May 2012 9:20 AM To: moo...@li... Subject: Re: [Moosefs-users] fox and bird > makes a small amount of difference, but firefox is still unusable. It > seems that my goal of having home directories in MFS is not going to be > workable. And I can see that Luster and glusterfs users are having the > same problems. Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock setup, except I've added "ignoregid" to /etc/mfsexports.cfg: * / rw,alldirs,ignoregid,maproot=0 and boosted: CHUNKS_WRITE_REP_LIMIT = 5 CHUNKS_READ_REP_LIMIT = 15 I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has been the smoothest system so far. I realize that doesn't help, but it's another data point... - Mike ------------------------------------------------------------------------------ Live Security Virtual Conference Exclusive live event will cover all the ways today's security and threat landscape has changed and how IT managers can respond. Discussions will include endpoint security, mobile security and the latest in malware threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ _______________________________________________ moosefs-users mailing list moo...@li... https://lists.sourceforge.net/lists/listinfo/moosefs-users |
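A hedged example of what Quenten suggests, using the mfssetgoal tool from MooseFS 1.6.x; the profile directory names are placeholders and the cache locations differ between Firefox and Thunderbird versions:

    # keep only a single copy of the browser cache chunks
    mfssetgoal -r 1 ~/.mozilla/firefox/xxxxxxxx.default/Cache
    mfssetgoal -r 1 ~/.thunderbird/xxxxxxxx.default/Cache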