From: Travis H. <tra...@tr...> - 2011-11-18 13:39:40
Although that sounds like a neat idea, in practice it would probably make for a very slow user experience.

The first technical challenge is the connected TCP socket nature of the protocol. We would need up to four (4) TCP ports open to the Internet, ideally firewalled so that only the set of master server, chunk server and metalogger machines could access them (a minimal firewall sketch follows the list):

* TCP port 9419 (MATOML): master server to metalogger servers. This port needs to be open on the mfsmaster machine if we wish to have the metalogger machines connect.
* TCP port 9420 (MATOCS): master server to chunk servers. This port needs to be open on the mfsmaster machine for the chunk servers to connect.
* TCP port 9421 (MATOCU): master server to clients. This port needs to be open on the mfsmaster machine for the mfsmount clients to connect.
* TCP port 9422 (CSSERV): chunk servers to clients. This port needs to be open on the chunkserver machines for the mfsmount clients to connect.
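For illustration, a minimal iptables sketch of that firewalling; the 203.0.113.0/24 range is a made-up placeholder for whatever addresses your metaloggers, chunk servers and clients would actually use:

  # On the mfsmaster machine: let only trusted hosts reach the MFS ports.
  iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 9419 -j ACCEPT   # MATOML (metaloggers)
  iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 9420 -j ACCEPT   # MATOCS (chunk servers)
  iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 9421 -j ACCEPT   # MATOCU (mfsmount clients)
  iptables -A INPUT -p tcp --dport 9419:9421 -j DROP                  # everyone else

  # On each chunkserver machine: same idea for the client-facing port.
  iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 9422 -j ACCEPT   # CSSERV (mfsmount clients)
  iptables -A INPUT -p tcp --dport 9422 -j DROP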
Additionally, the protocol more or less expects the chunk servers to maintain a persistent connection to the master server. The Internet in general is not reliable enough to guarantee this (unlike a local switch in a LAN environment, it is not always possible to keep everything plugged in all the time, everywhere).

Additionally, the MFS protocol does not have any concept of location or geographic awareness yet, so it would have no way of knowing which chunkserver was the closest.

Technical problems aside, I found performance to be very slow and unusable when I tried running mfsmount over the Internet even within my local city, between my office and my house.

What I have found works better is to run the mfsmount client on a machine on the same network as the master and chunk servers, and then connect to that machine remotely over SSHFS.
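Roughly like this, with made-up names (gateway.example.com being the LAN machine that carries the MooseFS mount):

  # On the LAN gateway machine: mount the MooseFS tree locally.
  mfsmount /mnt/mfs -H mfsmaster

  # On the remote client: pull that mount over SSH instead of speaking MFS.
  sshfs user@gateway.example.com:/mnt/mfs /mnt/mfs-remote

Only TCP port 22 then has to face the Internet, and SSHFS tolerates a flaky link far better than the MFS protocol does.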
I would say that if you want to use MooseFS, have individual installations in each regional area, and then put some other front-end file share or access mechanism, such as Samba, HTTP, FTP or SSHFS, facing the Internet. You would then either need a mechanism to replicate and synchronize contents between the data centers, or simply adopt a convention that some content is only available from some mirror sites.

If you are really interested in a file-system-level spread over multiple sites in large areas, perhaps have a look at the Coda file system. It aims to be a distributed file system rather than a cluster file system, and it supports disconnected operation and remote sites, like you were looking for, much better. I am sure you could likely even run Coda on top of a MooseFS installation.

On 11-11-17 2:02 PM, hussein ismail wrote:
> Hi,
>
> I would like to know, is it possible to set up a global MooseFS system, in
> which chunkservers are spread globally (for example, spread across Europe and
> China or other places)? And a related question: when a client requests data,
> will he receive it from the closest chunkserver (for example, if the client is
> in China and there is a chunkserver in China, will he receive the data from
> that server)?
>
> Thank you very much
>
> Br,
> Hussein