* nvme performance using old blk vs blk-mq, single thread
@ 2018-01-04 17:13 Alex Nln
From: Alex Nln @ 2018-01-04 17:13 UTC (permalink / raw)
Hello,
I am testing NVMe devices and I found that there is about a 15% degradation
in the performance of a single-threaded application when using blk-mq,
compared to the same application running on a kernel with the old blk layer.
I use fio-2.2.10, 4k block size, libaio, sequential reads, single thread.
Results for an Intel DC P3600 400GB NVMe SSD; the drive was formatted with
its native 512B sector size and brought to steady state before the test.
# kernel version     kIOPS
4.14.11-vanilla        163
3.19.0-vanilla         167
3.18.1-vanilla         196
3.16.0-34-generic      193
196K IOPS on 3.18.1 drops to 167K IOPS on 3.19.0.
It looks like the major NVMe-related change between 3.18.1 and 3.19.0
was the conversion of the nvme driver to blk-mq:
Commit a4aea5623d4a54682b6ff5c18196d7802f3e478f
NVMe: Convert to blk-mq
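As a quick sanity check (a sketch only; the device name is an assumption,
adjust for your system), you can tell whether a block device is driven by
blk-mq by looking for the per-hw-queue "mq" directory the kernel exposes
under sysfs for blk-mq devices:

```shell
#!/bin/sh
# Hypothetical device name -- adjust for your system.
dev=nvme0n1

# blk-mq drivers expose per-hw-queue directories under
# /sys/block/<dev>/mq; the legacy request layer does not.
if [ -d "/sys/block/$dev/mq" ]; then
  mode="blk-mq"
else
  mode="legacy-or-absent"
fi
echo "$dev: $mode"
```

On a post-3.19 kernel the nvme device should report blk-mq; on the older
kernels in the table above it should not.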
Just to verify my results, I ran more tests on a different server with a
different NVMe disk, an HGST SN200 800GB:
# kernel version     kIOPS
4.10.0-42-generic      330
3.16.0-34-generic      375
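For reference, the relative drops work out as follows (simple arithmetic on
the numbers in the two tables above, computed here with awk):

```shell
#!/bin/sh
# Percentage IOPS drop on each box, from the tables above:
# Intel P3600: 196 -> 167 kIOPS, HGST SN200: 375 -> 330 kIOPS.
intel_drop=$(awk 'BEGIN { printf "%.1f", (196 - 167) / 196 * 100 }')
hgst_drop=$(awk 'BEGIN { printf "%.1f", (375 - 330) / 375 * 100 }')
echo "Intel P3600 drop: ${intel_drop}%"   # ~14.8%
echo "HGST SN200 drop:  ${hgst_drop}%"    # ~12.0%
```

So both machines show a double-digit percentage drop, consistent with the
"about 15%" figure quoted at the top.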
Is this a known issue, or is there something wrong in my setup? There was
a similar thread about the same issue a while ago on this list, but it
reached no conclusion.
My fio file:
[global]
iodepth=128
direct=1
ioengine=libaio
group_reporting
time_based
runtime=60
filesize=32G
[job1]
rw=read
filename=/dev/nvme0n1p1
name=raw-sequential-read
numjobs=1
Thanks,
Alex