From: li_ndows <li_...@12...> - 2012-02-21 02:24:32
|
Hello, I have some questions about the struct matocsserventry:

1. What is the array hdrbuff[8] for?
2. What does packetstruct->packet contain?

Thanks
From: Michał B. <mic...@ge...> - 2012-02-20 15:45:19
|
Hi Ricardo!

Please try implementing this change and give us feedback on whether it helps in your case. Open the "masterconn.c" file in the "mfsmetalogger" folder; at the end of the "masterconn_beforeclose" function you have something like:

    if (eptr->logfd) {
        fclose(eptr->logfd);
    }

Please add the line "eptr->logfd = NULL;" so that the whole block looks like:

    if (eptr->logfd) {
        fclose(eptr->logfd);
        eptr->logfd = NULL;
    }

Regards
Michał

-----Original Message-----
From: Ricardo J. Barberis [mailto:ric...@da...]
Sent: Wednesday, February 15, 2012 6:15 PM
To: moo...@li...
Subject: [Moosefs-users] metalogger segfaults when mfsmaster server hangs/crashes

----------------------------------------------------------------------------
Virtualization & Cloud Management Using Capacity Planning
Cloud computing makes use of virtualization - but cloud computing also focuses on allowing computing to be delivered as a service.
http://www.accelacomm.com/jaw/sfnl/114/51521223/
_______________________________________________
moosefs-users mailing list
moo...@li...
https://lists.sourceforge.net/lists/listinfo/moosefs-users
From: Michał B. <mic...@ge...> - 2012-02-20 15:39:39
|
Hi!

With the instructions below you can increase the timeouts on chunkservers and mfsmounts. Please give them a try and come back to us with the results.

1. In the csserv.c file in mfschunkserver:

    // connection timeout in seconds
    #define CSSERV_TIMEOUT 5

change 5 to e.g. 10.

2. In the cscomm.c file in mfsmount:

    #define CSMSECTIMEOUT 5000

change 5000 to e.g. 10000.

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Sext [mailto:se...@aw...]
Sent: Wednesday, February 15, 2012 7:27 PM
To: moo...@li...
Subject: [Moosefs-users] Question about ECONNRESET (Connection reset by peer)
From: Quenten G. <QG...@on...> - 2012-02-18 10:26:06
|
Hi Elliot,

Thanks for the reply. I've been considering using 3 TB disks with ZFS in a 12/16-disk chassis with 2 x 6/8-disk raidz2, and using MFS only for HA/speed, with a replication factor of 2.

The pros and cons I've worked out so far: in the distributed JBOD configuration that MFS seems to like, I would have to use a replication factor of 3, which I believe gives 3x high availability and better recoverability from bit rot, disk failure and the like. However, with a replication factor of 3, for 2 TB of data at an MFS goal of 3 I need to store 6 TB, versus only 4 TB using a ZFS zpool. Multiply that by 10 and we're at 60 TB of raw storage for 20 TB of data versus 40 TB, which as you can imagine adds up very quickly. I would also like a second offsite replica at another datacentre, which doubles the storage requirements for 20 TB of data again: 120 TB versus 80 TB.

As I see it, using ZFS would protect us from bit rot, bad sectors and failed drives, and would also reduce rebuild times since rebuilds would be handled by ZFS, while MFS would provide high availability, replication and speed (striping). What do you think?

On another note, are you using NFS or iSCSI targets? From the MFS share, if I do "dd if=/dev/zero of=ddfile bs=32k count=100000" I get around ~70 MB/s, but over iSCSI or NFS I only get 10-18 MB/s. Our dev config is 4 servers with 1 x 500 GB SATA drive each, plus a 5th as metadata server, on Ubuntu 11 with ext4 as the disk filesystem. I also tried FreeBSD (FreeNAS with pkg_add -r moosefs-client), which didn't seem to make any difference, except that I couldn't use NFS on that setup.

Cheers,
Quenten
From: Atom P. <ap...@di...> - 2012-02-17 22:20:40
|
Robert, your advice helped me design my MFS system. One more thing I would add: chunk replication and client caching had a large effect on the performance of my clients.

I have a dedicated mfsmaster with a fast CPU and plenty of RAM, an mfsmount for each service that uses MFS (1-8 per host, Ubuntu), and each chunkserver has one large disk (FreeBSD, ZFS).

I had originally increased my chunk replication limits from write-1,read-5 to write-5,read-15 while I was adding chunkservers to the system. Once replication was done and I had added many clients (~12), I noticed that listing files on MFS mounts was slow: 10-20 seconds for a simple 'ls'. The master charts showed no change in CPU or anything else before and after the slowness started. Reducing the replication limits back to 1,5 restored performance. I still don't know why; all chunkservers were already balanced.

Also, enabling 'mfscachefiles' on the clients made a huge improvement in CPU use on the mfsmaster.

On 02/17/2012 06:16 AM, Robert Sandilands wrote:
> [...]

--
Perfection is just a word I use occasionally with mustard.
--Atom Powers--
Director of IT
DigiPen Institute of Technology
+1 (425) 895-4443
From: Robert S. <rsa...@ne...> - 2012-02-17 14:16:30
|
Hi Greg,

I have isolated several issues:

1. mfsmaster is single threaded. This implies that if it has to share time with any other processes on the same machine, performance can suffer significantly.
2. mfsmount has a single TCP connection to mfsmaster. This implies that most accesses to all the files on a mount share a single TCP connection to the master. This is a significant bottleneck.
3. mfschunkserver has 10 threads doing work for all the mfsmounts and 10 threads doing work for mfsmaster. This easily gets overwhelmed on a busy system.

Running mfsmaster on a dedicated machine helps with the first issue. Running multiple copies of mfsmount helps with the second issue. Limiting the number of disks managed by a specific mfschunkserver instance helps with the third issue.

My guess is that you should have one instance of mfsmount for about every 20 simultaneous accesses. You also should have one instance of mfschunkserver for every 10 disks (spindles). A dedicated mfsmaster is also a very good idea.

Robert

On 2/17/12 4:43 AM, Palak, Grzegorz (NSN - PL/Wroclaw) wrote:
> [...]
From: Palak, G. (N. - PL/Wroclaw) <grz...@ns...> - 2012-02-17 09:44:09
|
@Robert

What mfsmount bottlenecks do you have in mind? Have you noticed any performance-related bottlenecks?

Greg

-----Original Message-----
From: ext Robert Sandilands [mailto:rsa...@ne...]
Sent: Tuesday, February 14, 2012 12:56 AM
To: moo...@li...
Subject: Re: [Moosefs-users] Question re: win32 client/access
From: Michał B. <mic...@ge...> - 2012-02-17 09:28:42
|
Hi!

In this case the chunkserver resets the connection to a client. It is probably caused by a timeout, which may occur under high network load or high HDD usage. First check the network load. You can also send us the chunkserver charts from the CGI interface (last tab).

The message below is from the 192.168.1.93 (0xC0A8015D) machine. By the way, does only this machine give this message, or others as well?

Kind regards
Michał Borychowski
MooseFS Support Manager
Gemius S.A.
ul. Wołoska 7, 02-672 Warszawa
Budynek MARS, klatka D
Tel.: +4822 874-41-00
Fax : +4822 874-41-01

-----Original Message-----
From: Sext [mailto:se...@aw...]
Sent: Wednesday, February 15, 2012 7:27 PM
To: moo...@li...
Subject: [Moosefs-users] Question about ECONNRESET (Connection reset by peer)
From: Quenten G. <QG...@on...> - 2012-02-16 04:01:43
|
Hi All,

I was wondering what you may be using for iSCSI connectivity to your VM hosts. I've tried a Ubuntu enterprise iSCSI target and it seems quite slow: even after some minor tuning I'm only able to get 20-25 MB/s, whereas from the MFS mount I can do a dd write at 70-80 MB/s without any trouble.

Regards,
Quenten Grasso
From: Ricardo J. B. <ric...@da...> - 2012-02-15 20:32:37
|
On Wednesday 15/02/2012, matakura wrote:

AFAIK, the only kind of ACL is IP-based, configured in mfsexports.cfg.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: matakura <mat...@gm...> - 2012-02-15 19:16:58
|
--
matakura

"Solid knowledge is acquired day after day, with tenacity, obstinacy, persistence, dedication and concentration, without losing focus and with great calm. Don't rush to learn everything at once. Always remember how the tortoise won the race: because it did not divert its attention to anything outside its objective. That is the secret: always focus on your main objective and never lose sight of it."
Author: Soldado de peso
From: Sext <se...@aw...> - 2012-02-15 18:58:10
|
Hello,

Could you please tell me in which cases this error occurs:

Feb 15 18:09:25 mfsmount[2174]: readblock; tcpread error: ECONNRESET (Connection reset by peer)
Feb 15 18:09:25 mfsmount[2174]: file: 316275, index: 0, chunk: 134391, version: 1, cs: C0A8015D:9422 - readblock error (try counter: 3)

At rush hour, some of our servers begin to slow down and write these messages to the log. What should we check first? Could there be a problem with the network cards, or a problem on the switch? We use MFS for streaming video. All hard drives are loaded at no more than 40%, and each chunkserver is ~50% idle.

Kind regards,
Vadik Sext
From: Ricardo J. B. <ric...@da...> - 2012-02-15 17:43:56
|
Hi,

Today one of our mfsmasters crashed and had to be rebooted. While the master server was hung, the metalogger tried to re-connect but it segfaulted:

Feb 15 13:46:15 bkpmds02 mfsmetalogger[1992]: connecting ...
Feb 15 13:46:39 bkpmds02 mfsmetalogger[1992]: connection failed, error: EHOSTUNREACH (No route to host)
Feb 15 13:46:40 bkpmds02 mfsmetalogger[1992]: connecting ...
Feb 15 13:46:41 bkpmds02 kernel: mfsmetalogger[1992]: segfault at 0000000000000060 rip 0000003536c612ed rsp 00007ffff2247a10 error 4

At that time, you can see from the master logs that it was hung:

Feb 15 13:45:00 bkpmds01 mfsmaster[2437]: total: usedspace: 30279830740992 (28200.29 GiB), totalspace: 32242667642880 (30028.32 GiB), usage: 93.91%
Feb 15 13:49:36 bkpmds01 syslogd 1.4.1: restart.
Feb 15 13:49:36 bkpmds01 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Feb 15 13:49:36 bkpmds01 kernel: Linux version 2.6.18-274.17.1.el5 (moc...@bu...) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Tue Jan 10 17:25:58 EST 2012

Both servers have their time synchronized via NTP. Both are CentOS 5.7 64-bit with mfs 1.6.20 installed from RepoForge.org. I'm still investigating the cause of the master hang, as nothing gets logged (I'm thinking RAM or CPU problems), but I'm reporting the segfault in case it can be debugged.

Regards,
--
Ricardo J. Barberis
Senior SysAdmin / ITI
Dattatec.com :: Soluciones de Web Hosting
Tu Hosting hecho Simple!
From: Quenten G. <QG...@on...> - 2012-02-15 00:48:28
|
Hi Everyone,

I've been asked to compare GlusterFS with MooseFS. Please feel free to comment if you agree or have more info to add!

So far, some of the current advantages of MooseFS are:

1) Block-based storage with object-style resilience. What I mean by this is that since you're not using RAID controllers, UREs become a very minor issue.
2) Capacity is dynamically expanded/redistributed when a new chunkserver is added.
3) Deleted files have a "trash bin".
4) Snapshots!
5) If I have 36 disks, I benefit from all 36 disks, instead of just a 2x mirror or a 2-server stripe (in Gluster 3.3 beta).
6) File/folder-level replication ("goal") options.

Benefits of GlusterFS so far are:

1) Distributed hash tables (no metadata server). I know HA metadata servers are being worked on for MooseFS.
2) Paid support channels via Red Hat. Paid support is a big benefit for our business, as we are looking to use this as our primary storage platform.

Also, does MooseFS have an option for automatic snapshots, e.g. as a form of backup? Does MooseFS have an offsite replication feature to another MooseFS installation (e.g. a second DC), via snapshots or otherwise? And does MooseFS use granular locking: if we were running VMs and had a server failure while replication was in progress, would everything continue to function?

Regards,
Quenten Grasso
From: Robert S. <rsa...@ne...> - 2012-02-15 00:29:32
|
man smb.conf

Search for sendfile.

Robert

On 2/14/12 12:14 PM, Steve wrote:
> [...]
From: Steve <st...@bo...> - 2012-02-14 17:15:11
|
Hi,

Is async enabled by default? What do I look for?

I mainly use one share, but the Windows desktop drops out periodically, seemingly more often with > 4 torrent downloads at unthrottled ADSL bandwidth. I did think this was IPCop, as a reboot fixes it, but thinking back that's unlikely, because I've moved from IPCop 1.4 to IPCop 2 on different hardware and the same problem exists. IPCop links the desktop to the MFS hardware subnet.

Steve

-------Original Message-------
From: Robert Sandilands
Date: 14/02/2012 00:13:42
To: moo...@li...
Subject: Re: [Moosefs-users] Question re: win32 client/access
From: Robert S. <rsa...@ne...> - 2012-02-14 00:11:55
|
I have to disagree.

mfsmaster is very sensitive to load. You really want to run a dedicated master server. Setting up a (virtual) machine, or even multiple (virtual) machines, to mount the file system is a better idea.

Ensure that you disable all asynchronous file access in Samba. FUSE and sendfile() is not a great combination.

I would also recommend having a mount per SMB share. For example, if you plan to have 3 shares called "tools", "docs" and "tmp", then I would mount MooseFS on three folders: /mnt/mfs_tools, /mnt/mfs_docs and /mnt/mfs_tmp, and share those points using SMB. This implies running 3 instances of mfsmount. It works around some bottlenecks in mfsmount.

Robert

On 2/13/12 1:38 PM, JJ wrote:
> Perfect!
>
> Thank you.
>
> JJ
> Support Engineer
> Cirrhus9.com
>
> On 02/13/2012 01:38 PM, Travis Hein wrote:
>> ...
>> But functionally there is no technical consideration why you could not
>> run mfsmount on the mfsmaster node.
>> ...
From: JJ <jj...@ci...> - 2012-02-13 22:44:30
|
Perfect. Thanks!

JJ
Support Engineer
Cirrhus9.com

On 02/13/2012 05:22 PM, Travis Hein wrote:
> try adding the configure parameter "--enable-mfsmount"
From: Travis H. <tra...@tr...> - 2012-02-13 22:22:14
|
Try adding the configure parameter "--enable-mfsmount". This makes the prerequisite libraries mandatory; otherwise the configure script fails with a message like "require ..... for mfsmount". mfsmount uses the FUSE library and zlib, and the build requires (in addition to the usual build tools like make and gcc) the developer packages of these libraries to be installed, so that the headers are available. By default, the configure script simply decides not to build the mfsmount client if one or more of these libraries/headers are missing, and continues to build the rest.

E.g. on my CentOS machines, I install the development packages with:

    yum install fuse-devel zlib-devel

and then (in your case):

    ./configure --prefix=/usr --sysconfdir=/etc/mfs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --enable-mfsmount

On 12-02-13 4:57 PM, JJ wrote:
> What would be the correct ./configure options|command to install
> mfsmount on a working mfsmaster?
>
> I tried ./configure --prefix=/usr --sysconfdir=/etc/mfs
> --localstatedir=/var/lib --with-default-user=mfs
> --with-default-group=mfs --disable-mfschunkserver && make
>
> with a find looking for it. Nothing.
>
> Tried fewer options (--disable*), no joy.
>
> Thank you for your time.
>
> JJ
> Support Engineer
> Cirrhus9.com
From: JJ <jj...@ci...> - 2012-02-13 22:01:19
|
What would be the correct ./configure options|command to install mfsmount on a working mfsmaster? I tried ./configure --prefix=/usr --sysconfdir=/etc/mfs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver && make and then a find looking for the mfsmount binary. Nothing. Tried fewer options (--disable*), no joy. Thank you for your time. JJ Support Engineer Cirrhus9.com On 02/13/2012 01:38 PM, Travis Hein wrote: > You would need to consider your network topology (e.g. in our > environment we have the MooseFS master and chunk servers and mfsmount > clients in their own back-end network segment, and a virtual machine > that runs mfsmount and Samba is the access point or "gateway" into the > file system for the Windows users). We tried to follow the design as one > would do an iSCSI SAN, where dedicated "SAN" network segments are used > for the back-end storage to the application servers, separate > from the front-end facing network segments the end users and clients > reach the applications through. > > You might also consider the amount of network IO operations that would > be done between the Windows machines and the (SCP operations?) mount > point that exposes the mfsmount-ed file system. Depending on > the speed of the network (100 Mbit or gigabit Ethernet) and the number of > concurrent requests, you might find contention for the network link and > this could reduce the performance of the file system as a whole. But > this is entirely subjective. > > But functionally there is no technical consideration why you could not > run mfsmount on the mfsmaster node. > > On 12-02-13 1:17 PM, JJ wrote: >> Now that our MooseFS install is functional, >> we want to provide access to our clients that are bound >> to a Windows-based OS. >> >> I would like to know if mounting (using mfsmount, running ./configure >> per the client install) on the mfsmaster is a good idea?
>> >> If it suggested ok to do, then I can ask our Windows users to install >> WinSCP and then I'd alter the mfs user's $home directory to be /mnt/mfs >> > > |
From: Travis H. <tra...@tr...> - 2012-02-13 20:09:08
|
I would say to just use the default file system that your OS or distro wants to use, or the one that you are comfortable with or have standardized on in your environment. E.g. the advanced features of btrfs would not be directly beneficial to mfsmount clients. Most likely the latency and performance of your network will be the major influence on the performance of the file system as a whole; choosing the file system for chunkservers is a similar discussion to whether we should use faster spindle-speed disks. The design intent for a chunkserver node was to be a commodity, disposable piece, allowing for expansion or replacement over time. In our environment we have different versions of CentOS on the chunkserver nodes, depending on when they were turned up, so they have a mix of ext3 and ext4 file systems underneath. We have not been able to notice any difference in their performance (nor have we bothered to come up with a performance test, really). On 12-02-13 2:35 PM, wkmail wrote: > We are currently using traditional, reliable Ext3 with no issues. > > It has been suggested that perhaps XFS would be faster/better or maybe > Ext4 or even btrfs. > > We have had some bad experiences with XFS in the past, but we assume the > XFS devs have improved that and/or the MooseFS goal setting architecture > makes an XFS blowout on a single chunker less of a problem (assuming > they don't all go at once). > > Any comments? > > -bill |
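For anyone who does want a rough number when comparing chunkserver file systems, a crude sequential write/read probe through the mount is a starting point. This is a minimal sketch (the target path is an assumption; a real comparison would need larger data sets and concurrent clients):

```shell
#!/bin/sh
# Crude sequential-throughput probe for a mounted path.
# TARGET is an assumption; point it at a directory on the MooseFS mount.
TARGET="${1:-/tmp}/mfs_bench.dat"

# Write 64 MiB, forcing data out with fsync, then read it back.
# dd prints the achieved throughput on stderr.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
dd if="$TARGET" of=/dev/null bs=1M

rm -f "$TARGET"
```

Run it once against each candidate file system under comparable load; the absolute numbers matter less than the relative ones.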
From: wkmail <wk...@bn...> - 2012-02-13 19:35:31
|
We are currently using traditional, reliable Ext3 with no issues. It has been suggested that perhaps XFS would be faster/better or maybe Ext4 or even btrfs. We have had some bad experiences with XFS in the past, but we assume the XFS devs have improved that and/or the MooseFS goal setting architecture makes an XFS blowout on a single chunker less of a problem (assuming they don't all go at once). Any comments? -bill |
From: JJ <jj...@ci...> - 2012-02-13 18:42:04
|
Perfect! Thank you. JJ Support Engineer Cirrhus9.com On 02/13/2012 01:38 PM, Travis Hein wrote: > ... > But functionally there is no technical consideration why you could not > run mfsmount on the mfsmaster node. > ... |
From: Travis H. <tra...@tr...> - 2012-02-13 18:38:23
|
You would need to consider your network topology. E.g. in our environment we have the MooseFS master, chunk servers and mfsmount clients in their own back-end network segment, and a virtual machine that runs mfsmount and Samba is the access point or "gateway" into the file system for the Windows users. We tried to follow the design as one would for an iSCSI SAN, where dedicated "SAN" network segments carry the back-end storage traffic to the application servers, separate from the front-end network segments through which end users and clients reach the applications. You might also consider the amount of network IO that would be done between the Windows machines and the (SCP operations?) mount point that exposes the mfsmount-ed file system. Depending on the speed of the network (100 Mbit or gigabit Ethernet) and the number of concurrent requests, you might find contention for the network link, and this could reduce the performance of the file system as a whole. But this is entirely subjective. But functionally there is no technical consideration why you could not run mfsmount on the mfsmaster node. On 12-02-13 1:17 PM, JJ wrote: > Now that our MooseFS install is functional, > we want to provide access to our clients that are bound > to a Windows-based OS. > > I would like to know if mounting (using mfsmount, running ./configure > per the client install) on the mfsmaster is a good idea? > > If it is OK to do, then I can ask our Windows users to install > WinSCP and then I'd alter the mfs user's $home directory to be /mnt/mfs > |
From: JJ <jj...@ci...> - 2012-02-13 18:20:53
|
Now that our MooseFS install is functional, we want to provide access to our clients that are bound to a Windows-based OS. I would like to know if mounting (using mfsmount, running ./configure per the client install) on the mfsmaster is a good idea? If it is considered OK to do, then I can ask our Windows users to install WinSCP and then I'd alter the mfs user's $home directory to be /mnt/mfs -- JJ Support Engineer Cirrhus9.com |
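The plan described here can be sketched in a few commands. This is hypothetical (the master host name and mount point are assumptions) and presumes mfsmount is already installed on the master; whether mounting on the master is wise at all is the question being asked.

```shell
#!/bin/sh
# Hypothetical: mount MooseFS on the master itself and point the mfs
# user's home directory at the mount, so WinSCP (SFTP) sessions land there.
mkdir -p /mnt/mfs
mfsmount /mnt/mfs -H mfsmaster.example.com   # -H: master host (assumed name)

# Change the mfs user's $HOME to the mount point (requires root).
usermod -d /mnt/mfs mfs
```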