Yucong Sun (叶雨飞) [mailto:sunyucong@...] wrote:
> Thanks, I'm already using deadline; noop doesn't seem to make
> a difference.
> Can you help me with that Wthreads/QueuedCommands question?
> On Wed, Jan 11, 2012 at 9:29 AM, Ross S. W. Walker
> <RWalker@...> wrote:
> > Tim Fletcher [mailto:tim@...] wrote:
> >> On Wed, 2012-01-11 at 07:54 -0800, Yucong Sun (叶雨飞) wrote:
> >> > Hi Ross,
> >> >
> >> > The storage is a 4-disk RAID10. I tested various WThreads +
> >> > QueuedCommands combinations; the best I can tell is that if WThreads *
> >> > QueuedCommands exceeds the physical queue_depth (64 in my case) by too
> >> > much, the latency rises greatly. I'm now running 4 * 16, but any more
> >> > explanation is much appreciated.
Well, I'm not sure anyone has examined the relationship between
WThreads and QueuedCommands, but I can explain how the code handles
them and we can come up with an educated hypothesis.
This assumes you're running the default per-target threads and not
a global thread pool.
Each target has X worker threads. As each request comes in, it is
added to the target's request queue, which wakes up the target's
threads; they then pull the next available command off the queue
until it is depleted.
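
For a mental model, here is a rough userspace sketch of that
per-target pattern. This is NOT IET's actual kernel code; every name
in it (tgt_queue, queue_cmd, worker_fn, ...) is made up purely for
illustration:

/*
 * Simplified model of "per-target request queue + WThreads workers".
 * Not IET code; userspace pthreads stand in for kernel threads.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct cmd {
    struct cmd *next;
    int tag;                     /* stand-in for the SCSI command payload */
};

struct tgt_queue {
    pthread_mutex_t lock;
    pthread_cond_t  wake;        /* signalled when a request is queued */
    struct cmd     *head, *tail; /* FIFO request queue for this target */
};

/* Connection/rx path: append a request and wake a sleeping worker. */
static void queue_cmd(struct tgt_queue *q, struct cmd *c)
{
    c->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = c;
    else
        q->head = c;
    q->tail = c;
    pthread_cond_signal(&q->wake);
    pthread_mutex_unlock(&q->lock);
}

/* Each of the target's WThreads runs this loop: pull commands until
 * the queue is depleted, then sleep until woken again. */
static void *worker_fn(void *arg)
{
    struct tgt_queue *q = arg;

    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (!q->head)
            pthread_cond_wait(&q->wake, &q->lock);
        struct cmd *c = q->head;
        q->head = c->next;
        if (!q->head)
            q->tail = NULL;
        pthread_mutex_unlock(&q->lock);

        /* The real code would submit the I/O to the backing store
         * (fileio/blockio) here; we just pretend. */
        printf("worker %lu handling cmd %d\n",
               (unsigned long)pthread_self(), c->tag);
        free(c);
    }
    return NULL;
}

int main(void)
{
    struct tgt_queue q = {
        .lock = PTHREAD_MUTEX_INITIALIZER,
        .wake = PTHREAD_COND_INITIALIZER,
    };
    pthread_t workers[4];            /* "WThreads 4" */

    for (int i = 0; i < 4; i++)
        pthread_create(&workers[i], NULL, worker_fn, &q);

    for (int i = 0; i < 8; i++) {    /* simulate 8 incoming requests */
        struct cmd *c = calloc(1, sizeof(*c));
        c->tag = i;
        queue_cmd(&q, c);
    }
    sleep(1);                        /* let the workers drain the queue */
    return 0;
}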
QueuedCommands (default 32) is used to set the initial MaxCmdSN,
which is the initial CmdSN + QueuedCommands.
We haven't implemented a sliding command window yet, so MaxCmdSN is
always CmdSN + QueuedCommands. That means that, when fully loaded,
IET will always have WThreads commands in flight and
QueuedCommands - WThreads commands waiting on the queue (on each
target, of course). It doesn't factor in the requests already queued
in the underlying storage, which could be problematic.
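To put numbers on your 4 * 16 setup (and glossing over the exact SN
arithmetic): the target advertises roughly MaxCmdSN = CmdSN + 16, so
the initiator can keep about 16 commands outstanding against that
target, of which 4 are actively being worked on by the worker threads
and up to 12 are just sitting on IET's queue, before anything queued
below IET is even counted.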
You could try setting nr_requests on the underlying storage to the
number of independent platters, so for a 4-disk RAID10, say 2
(1 on each disk, 2 for each RAID1, 2 for the RAID0), and control
the actual queue depth with QueuedCommands.
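If you want to experiment with that, the block-layer knob is the
standard sysfs tunable, e.g.:

    echo <N> > /sys/block/<disk>/queue/nr_requests

(substitute each member disk for <disk> and whatever depth you settle
on for <N>; I believe the kernel won't let it go below 4), and then
use QueuedCommands in ietd.conf to set the depth IET advertises per
target.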
I might also run iostat during the workload to get an idea of each
disk's queue depth and the number of IO operations being performed.
If these are 7200 RPM drives and there are 1600 random IOPS hitting
them, then of course it won't keep up, because this setup can handle
at most around 180 random IOPS (a 7200 RPM spindle manages roughly
90 random IOPS, and on a 4-disk RAID10 every random write lands on
both disks of a mirror pair, so effectively 2 x ~90). If you notice
that a particular drive's queue depth is high while the others' are
zero or 1, then you might have a faulty piece of hardware (drive,
controller, cable, etc.); that only holds if the disks' queue depths
are high enough to show a problem.
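For the measurement itself, running "iostat -x 2" (from the sysstat
package) on the target box while the load is running shows per-device
r/s, w/s and avgqu-sz, which gives you both the IOPS and the queue
depth on each member disk without guessing.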
Let me know what your tests reveal.