From: Aleksander W. <ale...@mo...> - 2016-12-06 13:14:14
Hi,

First of all, we have to clarify whether we are talking about files or directories. All operations on folders (metadata operations) are synchronized at the master level. Operations on files are more problematic, especially in distributed file systems.

If you want to write data from many clients to one file (this is a really rare case), it is a good idea to use file locks, for example fcntl, lockf or flock (all of them are implemented in MooseFS, but be aware that flock is supported only since FUSE 2.9, so only newer kernels support it). Without locks there is no way to be sure in which order writes are performed internally - the only guarantee is that, at any given time, only one client can write to one chunk of data (64MB). Each chunk modification also invalidates the caches of the other clients. A minimal locking sketch is included at the end of this message.

Best regards
Aleksander Wieliczko
Technical Support Engineer
MooseFS.com <moosefs.com>

On 11/28/2016 11:15 AM, Winters.Hua wrote:
> Dear MFS experts,
>
> I'm using MFS 3.0.81 (CentOS 6u3) for some tests and I have a
> question about concurrent reads and writes.
> In the user manual, I remember that all read operations are
> concurrent, and writes are as well, but writes to the same file
> fragment are sequential, controlled by the MFS server.
>
> So, if I have several clients, like Client A and Client B, it is
> possible to upload the same folder (folder M) to the MFS cache from
> both Client A and Client B at the same time.
> I think this write operation will happen on the same file fragment,
> so it should be sequential and synchronized by the MFS server, right?
> Also, at the same time, there are Client C and Client D, which may
> want to read the same folder M at the same time.
> For this case, how does MFS work to make sure both reads and writes
> work as expected? Or should I do some control on the client side?
>
> Appreciate your help and thanks.
> Regards,
> Xinghua Gao
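
[Editor's sketch, not part of the original reply] A minimal example of the fcntl-based locking suggested above, assuming a hypothetical MooseFS mount at /mnt/mfs and a file name shared.log; each writer takes an exclusive whole-file lock before appending, so concurrent writers from different clients take turns:

    /* Minimal sketch: serialize writes from many clients with a POSIX
     * fcntl() write lock.  /mnt/mfs and shared.log are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/mfs/shared.log";  /* hypothetical path */
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Lock the whole file for writing; F_SETLKW blocks until the
         * lock is granted. */
        struct flock lk;
        memset(&lk, 0, sizeof(lk));
        lk.l_type   = F_WRLCK;
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;    /* 0 = lock to end of file */
        if (fcntl(fd, F_SETLKW, &lk) < 0) {
            perror("fcntl(F_SETLKW)");
            close(fd);
            return 1;
        }

        const char msg[] = "one record written under the lock\n";
        if (write(fd, msg, sizeof(msg) - 1) < 0)
            perror("write");

        /* Release the lock so the next client can proceed. */
        lk.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }

The same pattern works with flock() instead of fcntl(), provided the clients run mfsmount on FUSE 2.9 or newer as noted above.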