From: Latchesar I. <lu...@io...> - 2005-11-25 22:27:45
On Fri, Nov 25, 2005 at 11:51:28AM -0600, Eric Van Hensbergen said:
> > diff --git a/fs/9p/mux.c b/fs/9p/mux.c
> > --- a/fs/9p/mux.c
> > +++ b/fs/9p/mux.c
> > @@ -4,7 +4,7 @@
> >
> >  /**
> > - * xread - force read of a certain number of bytes
> > - * @v9ses: session info structure
> > - * @ptr: pointer to buffer
> > - * @sz: number of bytes to read
> > + * v9fs_mux_calc_poll_procs - calculates the number of polling procs
> > + * based on the number of mounted v9fs filesystems.
> >  *
> > - * Chuck Cranor CS-533 project1
> > + * The current implementation returns sqrt of the number of mounts.
> >  */
> > +inline int
> > +v9fs_mux_calc_poll_procs(int muxnum)
> > +{
> > +	int n;
> > +
> > +	if (v9fs_mux_poll_task_num)
> > +		n = muxnum / v9fs_mux_poll_task_num +
> > +			(muxnum % v9fs_mux_poll_task_num ? 1 : 0);
> > +	else
> > +		n = 1;
> > +
> > +	if (n > NELEM(v9fs_mux_poll_tasks))
> > +		n = NELEM(v9fs_mux_poll_tasks);
>
> Still strikes me that we should cap the number of poll procs by the
> number of CPUs the system has.

The problem is that the polling proc is called every time data arrives
(or is ready to be sent) on a transport. The proc iterates through all
the muxes and calls poll on every transport, and that is too expensive.
The idea of having multiple procs is to keep both the number of procs
and the number of transports every proc iterates through as low as
possible. I am not sure the current formula is best for the common
case. I think we have to experiment with different formulas -- that's
the reason I created a separate function.

> > +static int
> > +v9fs_poll_proc(void *a)
> ....
> > +
> > +		dprintk(DEBUG_MUX, "sleeping...\n");
> > +		schedule_timeout(10*HZ);
> > +	}
>
> Perhaps that should be something more dynamic? Set by a mount time or
> module initialization option. If not a dynamic option then at the
> very least a constant.

I would just do schedule, but I hoped at some point to add some code
that checks whether a blocked read or write (which should be a bug) is
preventing the rest of the transports from reading and writing. I don't
see any other reason not to block indefinitely.

> > @@ -427,42 +437,25 @@ v9fs_create(struct inode *dir,
> >
> >  	v9fs_mistat2inode(fcall->params.rstat.stat, file_inode, sb);
> >  	kfree(fcall);
> > +	fcall = NULL;
> > +	file_dentry->d_op = &v9fs_dentry_operations;
>
> This looks more like a bug you found and fixed than the mux re-write.

Yes, it fixes a memory leak. I'll create a separate patch.

> The only other quick question I have is on testing. It seems like
> there are lots of changes to the locking/unlocking bits as part of the
> mux re-write. Did you test on a multi-processor machine? If not, I
> can run the code through 2-way and 8-way permutations to get a level
> of confidence that we did all the SMP locking stuff right. In truth,
> I'm not sure I did that previously, but I think it might be a good
> idea.

Yes, it would be very helpful if you can test it on SMP. I don't have
access to any. :( We have to do some more testing, maybe create a test
plan.

I'll send a patch that fixes most of the comments.

Thanks,
    Lucho