From: Travis <tra...@tr...> - 2010-01-15 13:18:32
I'm running MooseFS on my notebook, the latest from the git repository: MFS version 1.6.11, FUSE library version 2.7.4. I have set up a 'tun' interface, given it the IP address 10.0.0.1, and have the mfsmaster and chunkserver attach to this. The motivation is that I have both wireless and wired network interfaces on the notebook and don't want the service to depend on whichever one is currently active, and also, because I move between different places, on whichever IP address I have on the LAN at the time. The mfsmaster and one mfschunkserver process run on this 10.0.0.1 address, and the mfsmount command in use is:

    mfsmount -H 10.0.0.1 /mnt/mfs

But it appears that mfsmount is choosing one of the other interfaces to connect 'from', such as eth0 or wlan0, and connecting from whatever IP address is on that LAN at the time. I noticed this because I had to enter the 192.168.x.x address into mfsexports.cfg in addition to the 10.0.0.x one, which I added for now to allow it to mount.

The main issue is that, usually after suspending the notebook and coming back online (in a different network), the mfs mount seems to get lost:

    ls /mnt/mfs
    ls: cannot access /mnt/mfs: Input/output error

So is there a way to make mfsmount use a specified interface (or source address) when it connects to its master and chunk servers?

Not that most people would run a complete self-contained stack on a notebook, right. My motivation is to have the services on the host and the mfsmount client inside one or more virtual machines that also use this tun interface, so as to create a shared file system for all the virtual machines. And in my road-warrior setup, having everything on one host works well.
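As far as I can tell mfsmount doesn't expose such an option; when a client calls connect() without a prior bind(), the kernel picks the source address from the routing table, which is exactly the behaviour I'm seeing. The mechanism a source-interface option would need is a bind() to the tun address before connecting. A minimal sketch of that mechanism in Python (using the loopback address so it runs anywhere; in my setup 10.0.0.1 and the master port 9421 would be substituted):

```python
import socket

def connect_from(source_ip, dest_ip, dest_port):
    """Open a TCP connection whose source address is pinned to source_ip.

    Binding to (source_ip, 0) before connect() forces the kernel to use
    that address instead of choosing one via route lookup.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((source_ip, 0))        # port 0: let the kernel pick the port
    sock.connect((dest_ip, dest_port))
    return sock

# Demo against a local listener; with mfsmount the call would look like
# connect_from("10.0.0.1", "10.0.0.1", 9421).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = connect_from("127.0.0.1", "127.0.0.1", server.getsockname()[1])
print(client.getsockname()[0])       # the source address actually in use
```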
But I wonder whether it could also come up in complex clusters or on other multi-homed machines, where both a high-performance data network and a low-performance management network are available and the networks can somehow route between each other; that could cause the data traffic from the mount to end up going over the lower-performance management network.