From: wkmail <wk...@bn...> - 2012-05-17 01:00:23

A previously stable cluster went crazy and we had to bring everything down and then back up to recover. Does this indicate a NETWORK issue of some sort, or is something else going on? The server is recovering from a failed disk from a few days ago, and we had also designated another drive for removal, so it is busy fixing two separate issues.

On the chunkserver logs we see this:

May 16 15:16:50 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 15:57:37 mfs1chunker5 mfschunkserver[1399]: replicator: got status: 19 from (C0A80017:24CE)
May 16 16:17:34 mfs1chunker5 mfschunkserver[1399]: replicator: got status: 19 from (C0A80017:24CE)
May 16 16:38:40 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 16:39:15 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 16:45:35 mfs1chunker5 mfschunkserver[1399]: replicator: connect timed out
May 16 16:46:05 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 16:46:24 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 16:49:28 mfs1chunker5 mfschunkserver[1399]: (write) write error: ECONNRESET (Connection reset by peer)
May 16 17:24:51 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 17:24:57 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out
May 16 17:25:02 mfs1chunker5 mfschunkserver[1399]: replicator: connect timed out
May 16 17:39:55 mfs1chunker5 mfschunkserver[1399]: replicator: receive timed out

On the mfsmaster logs we see entries like this:

May 16 17:14:43 mfs1master mfsmaster[32522]: (192.168.0.24:9422) chunk: 0000000000A59D5B replication status: 28
May 16 17:14:45 mfs1master mfsmaster[32522]: (192.168.0.27:9422) chunk: 0000000000C8D906 replication status: 28
May 16 17:14:45 mfs1master mfsmaster[32522]: (192.168.0.22:9422) chunk: 0000000000CD6FCD replication status: 28
May 16 17:14:47 mfs1master mfsmaster[32522]: (192.168.0.25:9422) chunk: 000000000024AB78 replication status: 28
May 16 17:14:49 mfs1master mfsmaster[32522]: (192.168.0.21:9422) chunk: 0000000000CC7DEA replication status: 28
May 16 17:14:49 mfs1master mfsmaster[32522]: (192.168.0.24:9422) chunk: 00000000001A7DEA replication status: 28
May 16 17:14:52 mfs1master mfsmaster[32522]: (192.168.0.27:9422) chunk: 000000000027B995 replication status: 28
May 16 17:14:52 mfs1master mfsmaster[32522]: (192.168.0.22:9422) chunk: 00000000002BB995 replication status: 28
May 16 17:14:54 mfs1master mfsmaster[32522]: (192.168.0.25:9422) chunk: 0000000000258C07 replication status: 28
May 16 17:14:54 mfs1master mfsmaster[32522]: (192.168.0.21:9422) chunk: 0000000000238C07 replication status: 28
May 16 17:14:56 mfs1master mfsmaster[32522]: (192.168.0.24:9422) chunk: 0000000000205E79 replication status: 26
May 16 17:14:58 mfs1master mfsmaster[32522]: (192.168.0.22:9422) chunk: 00000000005330EB replication status: 26
May 16 17:14:59 mfs1master mfsmaster[32522]: (192.168.0.27:9422) chunk: 0000000000579A24 replication status: 26
May 16 17:14:59 mfs1master mfsmaster[32522]: (192.168.0.25:9422) chunk: 0000000000C89A24 replication status: 26
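A quick way to triage a burst like this is to tally the master log by chunkserver and status code. The lines below are a minimal sketch, assuming the master logs to /var/log/messages and that the lines look exactly like the samples above; adjust the path and field positions for your syslog setup.

    # count replication errors per (chunkserver, status code) in the master's syslog
    grep 'replication status:' /var/log/messages \
      | awk '{src=$6; code=$NF; n[src " status " code]++}
             END {for (k in n) printf "%6d  %s\n", n[k], k}' \
      | sort -rn | head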
From: wkmail <wk...@bn...> - 2012-05-16 21:34:47

We are building another MFS cluster. What is the current thinking regarding file systems for chunkservers using older retired Opteron 250 servers (SATA2), each with 2GB RAM?

We would use Scientific Linux 6.x, which defaults to EXT4, so that is the first thing that comes to mind, but reviews are mixed. Some really push EXT3 (http://source.yeeyan.org/view/353552_f0d/Linux%20filesystems%20benchmarked:%20EXT3%20vs%20EXT4%20vs%20XFS%20vs%20BTRFS), and since the chunkserver environment is decidedly single purpose we are leaning in that direction.

Note: we aren't terribly excited about XFS as we have had some issues with it in the past, but if someone feels strongly about it...

Sincerely,
-wk
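Whichever filesystem wins the ext3/ext4 debate, preparing a chunkserver disk is the same apart from the mkfs call. The following is a minimal sketch with a hypothetical device and mount point; the mkfs options and the /etc/mfshdd.cfg location are common defaults but may differ on your distribution.

    # format a dedicated data disk (device name and label are illustrative)
    mkfs.ext4 -m 0 -L mfschunk1 /dev/sdb1      # -m 0: no reserved blocks on a data-only disk
    mkdir -p /mnt/mfschunks1
    mount -o noatime /dev/sdb1 /mnt/mfschunks1

    # register the path with the chunkserver, then restart it so it scans the new disk
    echo '/mnt/mfschunks1' >> /etc/mfshdd.cfg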
From: Steve T. <sm...@cb...> - 2012-05-16 16:29:13

On Wed, 16 May 2012, Allen, Benjamin S wrote:
> Maybe a naive question, but could you all just symlink or bind mount .mozilla to something
> like /tmp/<user>_mozilla for each user? Say in /etc/profile or add the symlink to /etc/skel
> for new users.

That is essentially what I am doing now.
From: Steve W. <st...@pu...> - 2012-05-16 16:27:20

On 05/16/2012 12:19 PM, Allen, Benjamin S wrote:
> Maybe a naive question, but could you all just symlink or bind mount
> .mozilla to something like /tmp/<user>_mozilla for each user? Say in
> /etc/profile or add the symlink to /etc/skel for new users.
>
> I know this complicates the setup a bit. FF's history and what not
> won't be shared across hosts, but it should solve the performance issue.
>
> Ben

No, it's not a naive question/suggestion and it may be the route we'll need to take... or move all the users to Chromium. :-) Some users have reported similar problems with FF on NFS-mounted home directories, and one suggested work-around was to do exactly what you recommend or to use a RAM disk (https://wiki.archlinux.org/index.php/Firefox_Ramdisk).

Steve
From: Allen, B. S <bs...@la...> - 2012-05-16 16:19:34

Maybe a naive question, but could you all just symlink or bind mount .mozilla to something like /tmp/<user>_mozilla for each user? Say in /etc/profile or add the symlink to /etc/skel for new users.

I know this complicates the setup a bit. FF's history and what not won't be shared across hosts, but it should solve the performance issue.

Ben

On May 16, 2012, at 10:05 AM, Steve Wilson wrote:

> On 05/16/2012 11:37 AM, Dr. Michael J. Chudobiak wrote:
>>> I have just discovered that firefox 3.6.26, which I have on one of my
>>> machines, works perfectly with MFS. The version of firefox that I have
>>> been using for most of my testing, 10.0.4, does not work at all with MFS.
>>>
>>> I'd appreciate it if anyone is using firefox with MFS and has no problems,
>>> please document the version of firefox that you are using. And I caution
>>> you against upgrading!
>>
>> I believe FF4 is where the heavy use of sqlite profile files was added.
>> FF3 is ancient now...
>>
>> I'm using FF12.
>>
>> - Mike
>
> We are also using FF12 and have been seeing problems mainly with the SQLite
> write-ahead logging files:
>
> places.sqlite-wal
> places.sqlite-shm
> cookies.sqlite-wal
> cookies.sqlite-shm
>
> The user is only logged in from one workstation. Several times per day I'll
> get a spike of write attempts on one of these files as seen in the system log:
>
> May 15 20:41:20 iceman mfsmount[5084]: file: 2656491, index: 0 - fs_writechunk returns status 11
> May 15 20:41:21 iceman rsyslogd-2177: imuxsock lost 142624 messages from pid 5084 due to rate-limiting
>
> Steve
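A minimal sketch of the workaround Ben describes, for anyone who wants to try it: a login script that keeps the sqlite-heavy .mozilla directory on local disk and leaves only a symlink in the MooseFS-backed home directory. The file name and /tmp layout are illustrative assumptions, not part of the original suggestion.

    # /etc/profile.d/firefox-local-profile.sh  (hypothetical file name)
    if [ -n "$USER" ] && [ ! -L "$HOME/.mozilla" ]; then
        mkdir -p "/tmp/${USER}_mozilla"
        if [ -d "$HOME/.mozilla" ]; then
            # one-time migration of an existing profile onto local disk
            cp -a "$HOME/.mozilla/." "/tmp/${USER}_mozilla/"
            rm -rf "$HOME/.mozilla"
        fi
        ln -s "/tmp/${USER}_mozilla" "$HOME/.mozilla"
    fi

As noted in the thread, history and bookmarks then stop following the user between hosts; that is the trade-off being accepted.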
From: Steve T. <sm...@cb...> - 2012-05-16 16:10:47

On Wed, 16 May 2012, Dr. Michael J. Chudobiak wrote:
> You don't have multiple firefox instances accessing the same profile, do
> you? Or some other multiple-access scenario?

No, it's just a straightforward single user (me) scenario.
From: Steve W. <st...@pu...> - 2012-05-16 16:05:59

On 05/16/2012 11:37 AM, Dr. Michael J. Chudobiak wrote:
>> I have just discovered that firefox 3.6.26, which I have on one of my
>> machines, works perfectly with MFS. The version of firefox that I have
>> been using for most of my testing, 10.0.4, does not work at all with MFS.
>>
>> I'd appreciate it if anyone is using firefox with MFS and has no problems,
>> please document the version of firefox that you are using. And I caution
>> you against upgrading!
>
> I believe FF4 is where the heavy use of sqlite profile files was added.
> FF3 is ancient now...
>
> I'm using FF12.
>
> - Mike

We are also using FF12 and have been seeing problems mainly with the SQLite write-ahead logging files:

places.sqlite-wal
places.sqlite-shm
cookies.sqlite-wal
cookies.sqlite-shm

The user is only logged in from one workstation. Several times per day I'll get a spike of write attempts on one of these files as seen in the system log:

May 15 20:41:20 iceman mfsmount[5084]: file: 2656491, index: 0 - fs_writechunk returns status 11
May 15 20:41:21 iceman rsyslogd-2177: imuxsock lost 142624 messages from pid 5084 due to rate-limiting

Steve
From: Dr. M. J. C. <mj...@av...> - 2012-05-16 15:46:06

>> Out of interest have you tried setting the goal of the fox/bird cache folder to 1?
>
> Well it was a good idea but unfortunately it doesn't make any difference
> at all :-(

You don't have multiple firefox instances accessing the same profile, do you? Or some other multiple-access scenario?

- Mike
From: Dr. M. J. C. <mj...@av...> - 2012-05-16 15:37:30

> I have just discovered that firefox 3.6.26, which I have on one of my
> machines, works perfectly with MFS. The version of firefox that I have
> been using for most of my testing, 10.0.4, does not work at all with MFS.
>
> I'd appreciate it if anyone is using firefox with MFS and has no problems,
> please document the version of firefox that you are using. And I caution
> you against upgrading!

I believe FF4 is where the heavy use of sqlite profile files was added. FF3 is ancient now...

I'm using FF12.

- Mike
From: Steve T. <sm...@cb...> - 2012-05-16 14:48:41

On Wed, 16 May 2012, Steve Thompson wrote:
> Well it was a good idea but unfortunately it doesn't make any difference
> at all :-(

Aha! I have just discovered that firefox 3.6.26, which I have on one of my machines, works perfectly with MFS. The version of firefox that I have been using for most of my testing, 10.0.4, does not work at all with MFS.

I'd appreciate it if anyone is using firefox with MFS and has no problems, please document the version of firefox that you are using. And I caution you against upgrading!

Steve
From: Steve T. <sm...@cb...> - 2012-05-16 14:25:55

On Wed, 16 May 2012, Quenten Grasso wrote:
> Out of interest have you tried setting the goal of the fox/bird cache folder to 1?

Well it was a good idea but unfortunately it doesn't make any difference at all :-(

Steve
From: Karol P. <kar...@ar...> - 2012-05-16 12:48:39

Eh, I didn't read to the end of your mail, so nvm...

On 05/16/2012 02:38 PM, Karol Pasternak wrote:
> Hi,
>
> it goes over the network; probably this could be the bottleneck in your benchmark.
>
> Karol
>
> On 05/16/2012 02:14 PM, Stefan Priebe - Profihost AG wrote:
>> Hi list,
>>
>> I'm just stress testing moosefs a bit with tiobench and bonnie++. I'm
>> wondering why the rewrite test of bonnie++ is slow with moosefs. All my
>> 4 chunkservers just have a cpu load of max. 20% and network load of 5%.
>>
>> I'm just getting about 20Mb/s. Even all my SSDs show up with a speed of
>> 140-180 Mb/s.
>>
>> Any ideas?
>>
>> Greets Stefan

--
Pozdrawiam/Best Regards
Karol Pasternak
Linux Administrator | ARBOmedia
From: Karol P. <kar...@ar...> - 2012-05-16 12:38:18

Hi,

it goes over the network; probably this could be the bottleneck in your benchmark.

Karol

On 05/16/2012 02:14 PM, Stefan Priebe - Profihost AG wrote:
> Hi list,
>
> I'm just stress testing moosefs a bit with tiobench and bonnie++. I'm
> wondering why the rewrite test of bonnie++ is slow with moosefs. All my
> 4 chunkservers just have a cpu load of max. 20% and network load of 5%.
>
> I'm just getting about 20Mb/s. Even all my SSDs show up with a speed of
> 140-180 Mb/s.
>
> Any ideas?
>
> Greets Stefan
From: Stefan P. - P. AG <s.p...@pr...> - 2012-05-16 12:14:52

Hi list,

I'm just stress testing moosefs a bit with tiobench and bonnie++. I'm wondering why the rewrite test of bonnie++ is slow with moosefs. All my 4 chunkservers just have a cpu load of max. 20% and network load of 5%.

I'm just getting about 20Mb/s. Even all my SSDs show up with a speed of 140-180 Mb/s.

Any ideas?

Greets Stefan
From: Steve T. <sm...@cb...> - 2012-05-16 00:10:03

On Wed, 16 May 2012, Quenten Grasso wrote:
> Out of interest have you tried setting the goal of the fox/bird cache folder to 1?

Wow, that is a good idea. No, I have not done that, but I will do so first thing tomorrow morning and will report back.

Steve

--
----------------------------------------------------------------------------
Steve Thompson, Cornell School of Chemical and Biomolecular Engineering
smt AT cbe DOT cornell DOT edu
"186,282 miles per second: it's not just a good idea, it's the law"
----------------------------------------------------------------------------
From: Quenten G. <QG...@on...> - 2012-05-16 00:04:00

Out of interest have you tried setting the goal of the fox/bird cache folder to 1?

Quenten,

-----Original Message-----
From: Dr. Michael J. Chudobiak [mailto:mj...@av...]
Sent: Wednesday, 16 May 2012 9:20 AM
To: moo...@li...
Subject: Re: [Moosefs-users] fox and bird

> makes a small amount of difference, but firefox is still unusable. It
> seems that my goal of having home directories in MFS is not going to be
> workable. And I can see that Lustre and glusterfs users are having the
> same problems.

Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock setup, except I've added "ignoregid" to /etc/mfsexports.cfg:

* / rw,alldirs,ignoregid,maproot=0

and boosted:

CHUNKS_WRITE_REP_LIMIT = 5
CHUNKS_READ_REP_LIMIT = 15

I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has been the smoothest system so far. I realize that doesn't help, but it's another data point...

- Mike
From: Steve T. <sm...@cb...> - 2012-05-15 23:56:51

On Tue, 15 May 2012, Dr. Michael J. Chudobiak wrote:
>> makes a small amount of difference, but firefox is still unusable. It
>> seems that my goal of having home directories in MFS is not going to be
>> workable. And I can see that Lustre and glusterfs users are having the
>> same problems.
>
> Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock
> setup, except I've added "ignoregid" to /etc/mfsexports.cfg:
>
> * / rw,alldirs,ignoregid,maproot=0
>
> and boosted:
>
> CHUNKS_WRITE_REP_LIMIT = 5
> CHUNKS_READ_REP_LIMIT = 15
>
> I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has
> been the smoothest system so far. I realize that doesn't help, but it's
> another data point...

Mike,

That is useful to know, especially with regard to glusterfs. I have also made the same two changes, along with using all high-end-ish hardware and a dedicated chunkserver network with dual bonded links. I realize that several people have this working OK, but I have no idea why it doesn't work for me :-(

Thanks,
Steve
From: Dr. M. J. C. <mj...@av...> - 2012-05-15 23:19:45

> makes a small amount of difference, but firefox is still unusable. It
> seems that my goal of having home directories in MFS is not going to be
> workable. And I can see that Lustre and glusterfs users are having the
> same problems.

Hmm. It works OK for me on my Fedora 16 systems. I think I have a stock setup, except I've added "ignoregid" to /etc/mfsexports.cfg:

* / rw,alldirs,ignoregid,maproot=0

and boosted:

CHUNKS_WRITE_REP_LIMIT = 5
CHUNKS_READ_REP_LIMIT = 15

I couldn't make F16 + NFSv4 or glusterfs work for home folders. moosefs has been the smoothest system so far. I realize that doesn't help, but it's another data point...

- Mike
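For anyone wanting to replicate Mike's setup, the two tweaks live in the master's configuration files. This is a sketch assuming the default config locations (some packages install them under /etc/mfs/) and that a reload is enough to apply them; if the replication limits are not picked up on reload, restart the master.

    # /etc/mfsexports.cfg
    # *  /  rw,alldirs,ignoregid,maproot=0

    # /etc/mfsmaster.cfg
    # CHUNKS_WRITE_REP_LIMIT = 5
    # CHUNKS_READ_REP_LIMIT = 15

    mfsmaster reload      # re-read the config without stopping the master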
From: Steve T. <sm...@cb...> - 2012-05-15 22:35:06

On Fri, 11 May 2012, Dr. Michael J. Chudobiak wrote:
> I would focus on the sqlite files that firefox uses. sqlite is notorious for
> causing problems on remote filesystems (particularly NFS).
> "urlclassifier3.sqlite" in particular grows to be very large (~64 MB).

Indeed it is the sqlite files that cause the problems. The places.sqlite file isn't even recognized; I get no bookmarks or browsing history at all when running from MFS. There are also hundreds of cookies.sqlite-journal files that show up in the metadata trash folder. I have no trouble with NFS, though. I don't seem to be able to make any progress with this, unfortunately.

BTW, I can run Bonnie++ in the MFS file system and get a sequential write performance of 95 MB/sec. This is similar to the write performance I can get with NFS.

> Are the fsync times reported by mfs.cgi (under "disks") OK? Some apps call
> fsync much more frequently than others.

I rebuilt, for testing purposes, an mfschunkserver with the fsync() calls disabled, and verified this via CGI (fsync times of 1 microsecond). It makes a small amount of difference, but firefox is still unusable. It seems that my goal of having home directories in MFS is not going to be workable. And I can see that Lustre and glusterfs users are having the same problems.

Steve
From: Elliot F. <efi...@gm...> - 2012-05-15 14:50:15

MFS 1.6.20. We haven't had any problems accessing MFS filesystems via FreeBSD, but as you stated, there is an issue with re-export on FreeBSD (FUSE related). Also, we are unable to mount a MFS volume via fstab, so we just mount them via script in /usr/local/etc/rc.d.

root:~#>mfsmount /mfsroot -H fs1-master.etv.net -S / -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:wheel

root:~#>ls /mfsroot
./                 fileshare/         syncrify_backups/
../                netbootclients/    vmware/
etv10/             proxmox/           windows_backups/
exchange_archive/  proxmoxv2/

root:~#>df -h /mfsroot
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/fuse0    275T     92T    182T    34%    /mfsroot

root:~#>uname -a
FreeBSD fs1-chunkserver6.etv.net 8.2-STABLE FreeBSD 8.2-STABLE #1: Thu Feb 2 10:39:53 MST 2012 ro...@fs...:/usr/obj/usr/src/sys/GENERIC amd64

On Mon, May 14, 2012 at 4:45 PM, Atom Powers <ap...@di...> wrote:
> On 05/14/2012 02:54 PM, Elliot Finley wrote:
>>
>> We have FreeBSD machines mounting MooseFS directly. Never had a
>> problem with it.
>
> Good to hear. What version of FreeBSD, MooseFS are you using?
>
> I had a small set of problems related to FUSE when I was doing my testing on
> FreeBSD 8 and Mfs 1.6.20. IIR, it was relatively minor problems such as not
> mounting automatically on boot and not being able to re-export via NFS; but
> there must have also been something big enough to make it worth the effort
> to use an NFS exporter. I just wish I could remember what it was.
>
> --
> Perfection is just a word I use occasionally with mustard.
> --Atom Powers--
> Director of IT
> DigiPen Institute of Technology
> +1 (425) 895-4443
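For comparison, here is a minimal sketch of the kind of boot-time mount script Elliot mentions. The file name, master host, and mfsmount path are illustrative; if your export is password-protected you would also need to supply the password (e.g. via -o mfspassword=...), which this sketch does not do.

    #!/bin/sh
    # /usr/local/etc/rc.d/mfsmount_local.sh  (hypothetical name; executable scripts
    # in this directory are run at boot on FreeBSD)
    MASTER=fs1-master.example.net     # illustrative master host
    MNT=/mfsroot

    mkdir -p "$MNT"
    /usr/local/bin/mfsmount "$MNT" -H "$MASTER" -S /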
From: Stefan P. - P. AG <s.p...@pr...> - 2012-05-15 11:18:49

> I have seen other posts by list members about having different mfsmount
> instances: e.g. instead of a single mfsmount for /mfs that contains
> /mfs/folder1 and /mfs/folder2, have two mfsmount instances, one for
> /mfs/folder1 and one for /mfs/folder2.
>
> This of course depends on whether your storage is organized into discretely
> mountable folders like this. I haven't bothered to try this either.

I'm just using ONE client to stress test. With ONE bonnie instance I get 40 Mbit/s sequential writing; with 4 instances on ONE client I get 4x 40 Mbit/s. That's strange...

Stefan
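To reproduce the scaling Stefan describes, you can simply start several bonnie++ runs in parallel against the same mount. A minimal sketch, with illustrative directory names and file sizes (bonnie++ refuses to run as root unless a user is given with -u):

    # four parallel bonnie++ instances on one mfsmount
    for i in 1 2 3 4; do
        mkdir -p /mfs/bench$i && chown nobody /mfs/bench$i
        bonnie++ -d /mfs/bench$i -s 4096 -u nobody > /tmp/bonnie$i.log 2>&1 &
    done
    wait
    grep -h ',' /tmp/bonnie*.log    # the machine-readable summary line from each run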
From: Atom P. <ap...@di...> - 2012-05-14 22:45:51

On 05/14/2012 02:54 PM, Elliot Finley wrote:
> We have FreeBSD machines mounting MooseFS directly. Never had a
> problem with it.

Good to hear. What version of FreeBSD, MooseFS are you using?

I had a small set of problems related to FUSE when I was doing my testing on FreeBSD 8 and Mfs 1.6.20. IIR, it was relatively minor problems such as not mounting automatically on boot and not being able to re-export via NFS; but there must have also been something big enough to make it worth the effort to use an NFS exporter. I just wish I could remember what it was.

--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443
From: Elliot F. <efi...@gm...> - 2012-05-14 21:54:38

We have FreeBSD machines mounting MooseFS directly. Never had a problem with it.

On Fri, May 11, 2012 at 9:55 AM, Atom Powers <ap...@di...> wrote:
> Yes, I do this on one of my machines. We have some operating systems
> (FreeBSD) that can't mount MooseFS directly. Fortunately there are few
> enough of them that I only need one gateway, for now. (I'm using Ubuntu
> 10 or 11 on the gateway, but any Linux should work.)
>
> I also have quite a number of systems that serve as SMB/Samba gateways
> for MS Windows systems.
>
> On 05/11/2012 07:33 AM, Boris Epstein wrote:
>> Hello all,
>>
>> Has anybody ever successfully set up a machine that would mount a
>> MooseFS as a client and serve that mount point up to others via NFS?
>
> --
> Perfection is just a word I use occasionally with mustard.
> --Atom Powers--
> Director of IT
> DigiPen Institute of Technology
> +1 (425) 895-4443
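The gateway setup being discussed is essentially a Linux host that mounts MooseFS via FUSE and re-exports the mount over kernel NFS. A minimal sketch, with an illustrative master name and client subnet; the fsid option is needed because a FUSE mount has no stable device number for the NFS server to identify the export by.

    # on the Linux gateway
    mkdir -p /mnt/mfs
    mfsmount /mnt/mfs -H mfsmaster.example.net     # illustrative master host

    # /etc/exports
    # /mnt/mfs  192.168.0.0/24(rw,fsid=100,no_subtree_check,sync)

    exportfs -ra      # (re)load the export table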
From: Travis H. <tra...@tr...> - 2012-05-14 20:42:24

On 12-05-14 11:50 AM, Stefan Priebe wrote:
> Hi,
>
> we're thinking about replacing our SAN for KVM with a distributed FS
> like MooseFS.
>
> I'm running a small 4-machine MooseFS cluster with a 180GB SSD each and
> I'm not able to gain more performance than 77Mbit/s. Is this the limit?
> Is there a way to achieve higher throughput? I'm using bonding,
> 2x1Gbit/s network cards.
>
> Greets
> Stefan

I have seen other posts by list members about having different mfsmount instances: e.g. instead of a single mfsmount for /mfs that contains /mfs/folder1 and /mfs/folder2, have two mfsmount instances, one for /mfs/folder1 and one for /mfs/folder2.

This of course depends on whether your storage is organized into discretely mountable folders like this. I haven't bothered to try this either.

Another idea, if you want to reduce network latencies, is to use 10GB Ethernet. I am not convinced bonding 2x1GB network links would improve things enough to justify the complicated configuration.
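The multi-mount idea above maps onto mfsmount's -S option, which mounts only a subfolder of the MooseFS namespace. A minimal sketch with illustrative folder names:

    # two independent mfsmount processes, each serving one subtree
    mkdir -p /mfs/folder1 /mfs/folder2
    mfsmount /mfs/folder1 -H mfsmaster -S /folder1
    mfsmount /mfs/folder2 -H mfsmaster -S /folder2

Each mount runs as its own FUSE process, which is the point of the suggestion: I/O from the two subtrees no longer funnels through a single mfsmount.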
From: Alexander A. <akh...@ri...> - 2012-05-14 20:36:17

Hi!

You can start one more metalogger. When it starts, it downloads all metadata from the master. So you can get the metadata on the metalogger, then stop the metalogger and try to start a master on the metalogger host to check if your metadata is consistent. Then, if yes, copy the metadata from the metalogger to the real master.

Or you can force an already running mfsmetalogger to download fresh metadata with kill -HUP <mfsmetalogger PID>.

wbr
Alexander

======================================================

Thanks for the hint, but I still have a problem with mfsmaster and the metadata.mfs.back file, which isn't saved. I'm afraid if I stop mfsmaster I will still have the same old file ;/

Do you have any idea how to force mfsmaster (without stopping it) to save a fresh metadata.mfs.back?

br.
Karol

On 05/14/2012 04:46 PM, Adam Ochmanski wrote:
> On 12-05-14 14:45, Karol Pasternak wrote:
>> Hi all,
>>
>> I have a problem with restoring metadata.mfs from mfsmetalogger.
>>
>> # mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs changelog_ml*
>> loading objects (files,directories,etc.) ... ok
>> loading names ... ok
>> loading deletion timestamps ... ok
>> loading chunks data ... ok
>> checking filesystem consistency ... ok
>> connecting files and chunks ... ok
>> found garbage at the end of file: changelog_ml_back.0.mfs (last correct id: 66424694)
>> hole in change files (entries from changelog_ml_back.1.mfs:66475886 to changelog_ml.0.mfs:67436656 are missing) - add more files
>>
>> Anybody had a similar problem? How to solve it?
>
> Hi,
> use all changelogs from metalogger and master, then do the restore. Should help.
>
> blink
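Putting Alexander's advice together with the earlier hint to use all changelogs, the recovery flow looks roughly like this. A minimal sketch, assuming the metalogger's working directory holds the files shown above and that the master's changelog files have been copied next to them; paths and globs are illustrative.

    # 1) force the running metalogger to pull a fresh metadata copy from the master
    kill -HUP "$(pgrep mfsmetalogger)"

    # 2) rebuild metadata from the metalogger backup plus ALL changelogs
    #    (the metalogger's changelog_ml* and the master's changelog.*.mfs side by side)
    mfsmetarestore -m metadata_ml.mfs.back -o /tmp/metadata.mfs changelog_ml* changelog.*.mfs

    # 3) test the result by starting a master on the metalogger host against it,
    #    and only then copy it to the real master as metadata.mfs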