From: Göran B. <goe...@gm...> - 2019-06-08 12:17:34
On 08.06.2019 at 04:01, Bart Van Assche wrote:
> On 6/6/19 9:47 PM, Göran Bruns wrote:
>> make: *** [Makefile:174: Module.symvers] Error 1
>
> The changes checked in as trunk r8419 should fix this error.

Build revision 8420 ... no issues ;)

>> Any ideas regarding the large files issue?
>
> I'm not sure. What performance does fio report when you run it on the
> SCST system locally (fio --direct=1 --rw=read --ioengine=libaio ...)?

Performance with non-buffered I/O is very bad. It results in a vast number
of transfers per second (see iostat below).

fio --direct=1 --rw=read --ioengine=libaio --name=iscsi --filename=/dev/raid/fio-test --size=5G

iscsi: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.15
Starting 1 process
Jobs: 1 (f=1): [R(1)] [100.0% done] [29368KB/0KB/0KB /s] [7342/0/0 iops] [eta 00m:00s]
iscsi: (groupid=0, jobs=1): err= 0: pid=6181: Sat Jun  8 13:19:58 2019
  read : io=5120.0MB, bw=29023KB/s, iops=7255, runt=180645msec
    slat (usec): min=4, max=180, avg= 7.69, stdev= 2.16
    clat (usec): min=64, max=53289, avg=128.59, stdev=266.81
     lat (usec): min=87, max=53301, avg=136.28, stdev=266.87
    clat percentiles (usec):
     |  1.00th=[  110],  5.00th=[  112], 10.00th=[  113], 20.00th=[  114],
     | 30.00th=[  115], 40.00th=[  116], 50.00th=[  117], 60.00th=[  118],
     | 70.00th=[  119], 80.00th=[  123], 90.00th=[  129], 95.00th=[  137],
     | 99.00th=[  179], 99.50th=[  213], 99.90th=[ 3568], 99.95th=[ 7776],
     | 99.99th=[11200]
    lat (usec) : 100=0.08%, 250=99.51%, 500=0.27%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.03%, 10=0.07%, 20=0.02%, 50=0.01%
    lat (msec) : 100=0.01%
  cpu          : usr=2.51%, sys=7.82%, ctx=1310895, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1310720/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=29023KB/s, minb=29023KB/s, maxb=29023KB/s, mint=180645msec, maxt=180645msec

as seen in iostat:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.00    0.00   25.63    0.00    0.00   73.37

Device       tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda      2418.20         9.45         0.00         47          0
sdb      2418.80         9.45         0.00         47          0
sdc      2432.00         9.50         0.00         47          0
md1      7269.00        28.39         0.00        141          0
dm-2     7269.00        28.39         0.00        141          0
dm-9     7269.00        28.39         0.00        141          0

Write performance is even worse:

fio --direct=1 --rw=write --ioengine=libaio --name=iscsi --filename=/dev/raid/fio-test --size=5G

iscsi: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.15
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/11140KB/0KB /s] [0/2785/0 iops] [eta 00m:00s]
iscsi: (groupid=0, jobs=1): err= 0: pid=6629: Sat Jun  8 13:37:08 2019
  write: io=5120.0MB, bw=11309KB/s, iops=2827, runt=463585msec
    slat (usec): min=2, max=301, avg= 5.60, stdev= 2.65
    clat (usec): min=2, max=91358, avg=346.36, stdev=1801.54
     lat (usec): min=85, max=91383, avg=351.96, stdev=1801.56
    clat percentiles (usec):
     |  1.00th=[  129],  5.00th=[  133], 10.00th=[  145], 20.00th=[  149],
     | 30.00th=[  159], 40.00th=[  171], 50.00th=[  183], 60.00th=[  187],
     | 70.00th=[  197], 80.00th=[  225], 90.00th=[  241], 95.00th=[  298],
     | 99.00th=[ 1416], 99.50th=[15680], 99.90th=[27776], 99.95th=[31360],
     | 99.99th=[40704]
    lat (usec) : 4=0.01%, 100=0.01%, 250=91.49%, 500=6.00%, 750=0.53%
    lat (usec) : 1000=0.50%
    lat (msec) : 2=0.55%, 4=0.12%, 10=0.17%, 20=0.29%, 50=0.34%
    lat (msec) : 100=0.01%
  cpu          : usr=1.13%, sys=2.36%, ctx=1311652, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1310720/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=11309KB/s, minb=11309KB/s, maxb=11309KB/s, mint=463585msec, maxt=463585msec

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.61    0.00   14.44    0.00    0.00   84.95

Device       tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda      2310.60         1.83         7.20          9         36
sdb      2336.20         1.80         7.33          9         36
sdc      2336.00         1.80         7.32          9         36
md1      2796.80         0.00        10.93          0         54
dm-2     2796.80         0.00        10.93          0         54
dm-9     2796.80         0.00        10.93          0         54

As soon as buffered I/O is used, performance goes up and transfers per
second go down:

fio --rw=read --ioengine=libaio --name=iscsi --filename=/dev/raid/fio-test --size=5G

iscsi: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.15
Starting 1 process
Jobs: 1 (f=1): [R(1)] [100.0% done] [140.6MB/0KB/0KB /s] [35.1K/0/0 iops] [eta 00m:00s]
iscsi: (groupid=0, jobs=1): err= 0: pid=6889: Sat Jun  8 13:38:42 2019
  read : io=5120.0MB, bw=145502KB/s, iops=36375, runt= 36033msec
    slat (usec): min=0, max=30574, avg=26.53, stdev=590.41
    clat (usec): min=0, max=12871, avg= 0.50, stdev=31.70
     lat (usec): min=0, max=30582, avg=27.03, stdev=591.49
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    0],
     | 70.00th=[    0], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    1], 99.50th=[    1], 99.90th=[    5], 99.95th=[    6],
     | 99.99th=[   20]
    lat (usec) : 2=99.69%, 4=0.06%, 10=0.23%, 20=0.01%, 50=0.01%
    lat (usec) : 100=0.01%
    lat (msec) : 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=2.46%, sys=5.13%, ctx=3994, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1310720/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=145502KB/s, minb=145502KB/s, maxb=145502KB/s, mint=36033msec, maxt=36033msec

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.70    0.00   66.97   18.16    0.00   13.17

Device       tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda        91.42        45.91         0.00        230          0
sdb        88.82        46.21         0.00        231          0
sdc        74.25        46.11         0.00        231          0
md1       549.30       248.90         0.00       1247          0
dm-2      274.65       137.33         0.00        688          0
dm-9      274.65       137.33         0.00        688          0

fio --rw=write --ioengine=libaio --name=iscsi --filename=/dev/raid/fio-test --size=5G

iscsi: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.15
Starting 1 process
Jobs: 1 (f=1): [f(1)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 00m:00s]
iscsi: (groupid=0, jobs=1): err= 0: pid=6903: Sat Jun  8 13:40:21 2019
  write: io=5120.0MB, bw=173295KB/s, iops=43323, runt= 30254msec
    slat (usec): min=1, max=124914, avg=20.59, stdev=609.68
    clat (usec): min=0, max=16346, avg= 1.32, stdev=63.08
     lat (usec): min=1, max=124918, avg=21.91, stdev=613.77
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    0], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    3], 99.90th=[   19], 99.95th=[  253],
     | 99.99th=[ 2960]
    lat (usec) : 2=97.67%, 4=2.04%, 10=0.15%, 20=0.04%, 50=0.01%
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.06%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=3.88%, sys=8.30%, ctx=117406, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=1310720/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=173295KB/s, minb=173295KB/s, maxb=173295KB/s, mint=30254msec, maxt=30254msec

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.62    0.00   81.74   10.55    0.00    6.09

Device       tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda       139.20         0.01        70.04          0        350
sdb       121.20         0.03        69.20          0        345
sdc       116.60         0.03        69.43          0        347
md1     35186.00         0.00       137.45          0        687
dm-2    49526.00         0.00       193.46          0        967
dm-9    49511.60         0.00       193.40          0        967

But how could this be related to disk encryption? As I mentioned earlier,
I had no performance issues with this setup before using dm-crypt.

Göran
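For what it's worth, the four runs above are internally consistent: with
iodepth=1 and fio's default 4 KiB block size, the reported bandwidth is just
IOPS x 4 KiB, i.e. the direct-I/O runs are purely latency-bound, one request
at a time. A quick sanity check over the numbers copied from the runs above
(this check is mine, not part of the thread):

```python
# Sanity check: with iodepth=1 and bs=4K, bandwidth = IOPS x 4 KiB.
# The iops/bw figures below are copied from the fio runs in this mail.
BS_KIB = 4  # fio default block size, as shown in all four runs

runs = {
    "direct read":    {"iops": 7255,  "bw_kib_s": 29023},
    "direct write":   {"iops": 2827,  "bw_kib_s": 11309},
    "buffered read":  {"iops": 36375, "bw_kib_s": 145502},
    "buffered write": {"iops": 43323, "bw_kib_s": 173295},
}

for name, r in runs.items():
    computed = r["iops"] * BS_KIB
    # Reported bandwidth matches iops * 4 KiB to within rounding
    print(f"{name}: reported {r['bw_kib_s']} KiB/s, computed {computed} KiB/s")
```

So the ~29 MB/s direct read simply reflects a ~130 us per-request completion
latency; the question is what dm-crypt adds to that path.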