From: Marc S. <mar...@mc...> - 2012-03-25 02:41:15
Yeah, I have read about that, but I thought that was only relevant with
multiple sessions? In my simple test, it's one FC initiator to one
target HBA. Or maybe io_grouping_type does come into play with only one
session, and my setting is giving me poor performance with mixed random
read/writes only. In my test it's set to 'auto', which as I understand
it works well with iSCSI since it groups by initiator name, but with FC
each initiator name is a unique WWN.

--Marc

On Sat, Mar 24, 2012 at 10:26 PM, Marcus Sorensen <sha...@gm...> wrote:
> I don't really know the details, but I know that SCST has an io
> grouping policy feature that utilizes CFQ. Perhaps you already know
> this. It gives you the option of forcing a particular target or set of
> targets to use the same CFQ queue (or different queues). So it's
> possible that your local test's IO is being 'binned' differently than
> the SCST test's.
>
> On Sat, Mar 24, 2012 at 8:19 PM, Marc Smith <mar...@mc...> wrote:
>> Oh yeah, that is definitely it. Using 'deadline' or 'noop' puts the
>> mixed IO performance numbers right up there in line with the
>> pass-through handler performance.
>>
>> I also re-ran the random-read-only and random-write-only tests with
>> fio under the deadline scheduler, and those numbers stayed good too.
>>
>> This is kinda interesting, and I'll say it again: when I run the
>> mixed random read/write test using fio on the local server (i.e.,
>> not "through" SCST), my performance numbers are fine with the
>> storage (with the CFQ scheduler).
>>
>> So, is it safe to say CFQ is no good with BLOCKIO in all cases? Can
>> anyone else confirm these performance differences between the Linux
>> I/O schedulers?
>>
>> --Marc
>>
>> On Sat, Mar 24, 2012 at 11:04 AM, Marcus Sorensen <sha...@gm...> wrote:
>>> It's definitely worth changing the scheduler and retrying. It may
>>> have to do with the io grouping that SCST does with CFQ. I for one
>>> don't trust CFQ, even though you get the io grouping feature, as I
>>> can still reproduce a scenario where I can crash a box on CFQ by
>>> starving writes with reads.
>>>
>>> On Sat, Mar 24, 2012 at 8:49 AM, Marc Smith <mar...@mc...> wrote:
>>>> On Sat, Mar 24, 2012 at 3:53 AM, Bart Van Assche <bva...@ac...> wrote:
>>>>> On 03/24/12 02:10, Marc Smith wrote:
>>>>>> On Fri, Mar 23, 2012 at 3:27 PM, Bart Van Assche <bva...@ac...> wrote:
>>>>>>> On 03/19/12 17:50, Marc Smith wrote:
>>>>>>>
>>>>>>>> In SCST I switched the device handler to dev_disk (pass-through)
>>>>>>>> instead of using BLOCKIO mode, and now I see near-local IOPS
>>>>>>>> from the initiator side:
>>>>>>>>
>>>>>>>> fio --bs=4k --direct=1 --rw=randrw --ioengine=libaio --iodepth=64
>>>>>>>> --rwmixread=50 --rwmixwrite=50 --name=/dev/sdk
>>>>>>>> [11.7K/11.8K iops]
>>>>>>>>
>>>>>>>> Should I be expecting this when using BLOCKIO mode? Is
>>>>>>>> pass-through mode better for mixed random read/write loads?
>>>>>>>
>>>>>>> Performance of BLOCKIO and pass-through should be similar as far
>>>>>>> as I know. This might be a block driver issue. What are the names
>>>>>>> of the Linux drivers of the RAID controller and the SSDs in your
>>>>>>> setup?
>>>>>>
>>>>>> root@apricot ~ # uname -a
>>>>>> Linux apricot 3.2.6-esos.prod #1 SMP Mon Mar 12 02:41:30 EDT 2012
>>>>>> x86_64 GNU/Linux
>>>>>>
>>>>>> root@apricot ~ # cat /sys/module/megaraid_sas/version
>>>>>> 00.00.06.12-rc1
>>>>>>
>>>>>> So, using 'megaraid_sas' -- the actual controller is an "LSI
>>>>>> MegaRAID SAS 9280-24i4e" (FW Package 12.12.0-0047).
>>>>>> The SATA SSDs are connected to an SGPIO backplane (24 slots) ->
>>>>>> MegaRAID SAS controller.
>>>>>
>>>>> So that's a SCSI driver. As far as I can see, for SCSI drivers
>>>>> BLOCKIO submits I/O requests to the Linux I/O scheduler, while for
>>>>> SCSI pass-through the Linux I/O scheduler is bypassed. Could that
>>>>> explain the performance difference observed on your setup?
>>>>>
>>>> I checked, and I am using CFQ for the back-end block device:
>>>>
>>>> root@apricot ~ # cat /sys/block/sdb/queue/scheduler
>>>> noop deadline [cfq]
>>>>
>>>> When I run fio on the local block device (on the SCST storage
>>>> server itself), I am using '/dev/sdb', so that is going through the
>>>> Linux I/O scheduler too, right? Yet those mixed IO performance
>>>> numbers are "normal" (good).
>>>>
>>>> Do you think it's worth trying a different I/O scheduler (e.g.,
>>>> deadline) for the back-end block device in conjunction with the
>>>> BLOCKIO handler?
>>>>
>>>> Now I'm really curious whether our current SCST storage servers in
>>>> production have this mixed IO performance issue, since we are using
>>>> BLOCKIO for those disks. I did some initial performance tests when
>>>> I set those up, but I don't think I actually ran any of the mixed
>>>> random r/w tests with fio -- only random read or random write by
>>>> themselves.
>>>>
>>>> I have a development box that matches what we are running in
>>>> production right now (Linux kernel, SCST version, etc.). I'll do
>>>> the mixed IO test using fio and let you know what I find. I guess
>>>> what I'm hoping to learn is whether this mixed IO performance loss
>>>> is something new, or whether it has always been there.
>>>>
>>>> --Marc
>>>>
>>>>> Bart.
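
For reference, the io_grouping_type setting discussed at the top of
this thread is exposed through the SCST sysfs tree. A minimal sketch,
assuming a qla2x00t FC target -- the target WWN below is a made-up
placeholder, and the exact path may differ by SCST version:

# Show the current grouping policy for one target (placeholder names):
cat /sys/kernel/scst_tgt/targets/qla2x00t/50:01:43:80:06:7b:00:00/io_grouping_type
auto

# Per the SCST docs the value can be 'auto' (group by initiator name),
# 'this_group_only', 'never', or an explicit group number. For example,
# to stop grouping I/O from different sessions altogether:
echo never > /sys/kernel/scst_tgt/targets/qla2x00t/50:01:43:80:06:7b:00:00/io_grouping_type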
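
The scheduler switch that fixed the mixed-IO numbers in this thread is
a runtime one-liner on the back-end device ('sdb' as shown above); note
it does not persist across reboots:

echo deadline > /sys/block/sdb/queue/scheduler

cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq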
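
To reproduce the comparison from this thread, the mixed run Marc posted
plus the read-only and write-only variants look roughly like this (the
device name is an example; these run against the raw device, so point
them only at a disposable LUN):

# Mixed 50/50 random read/write, as posted above:
fio --bs=4k --direct=1 --rw=randrw --ioengine=libaio --iodepth=64 \
    --rwmixread=50 --rwmixwrite=50 --name=/dev/sdk

# Random read only, then random write only, same parameters otherwise:
fio --bs=4k --direct=1 --rw=randread --ioengine=libaio --iodepth=64 \
    --name=/dev/sdk
fio --bs=4k --direct=1 --rw=randwrite --ioengine=libaio --iodepth=64 \
    --name=/dev/sdk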