From: John Garry <john.garry@huawei.com>
To: Kashyap Desai <kashyap.desai@broadcom.com>,
Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
Hannes Reinecke <hare@suse.com>,
"Martin K . Petersen" <martin.petersen@oracle.com>,
"James E.J. Bottomley" <jejb@linux.ibm.com>,
Christoph Hellwig <hch@lst.de>, <linux-scsi@vger.kernel.org>,
Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>,
Sreekanth Reddy <sreekanth.reddy@broadcom.com>,
Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>,
PDL-MPT-FUSIONLINUX <mpt-fusionlinux.pdl@broadcom.com>,
chenxiang <chenxiang66@hisilicon.com>
Subject: Re: About scsi device queue depth
Date: Wed, 13 Jan 2021 12:17:47 +0000 [thread overview]
Message-ID: <16a66e96-a08f-78d1-155a-41bb5d31f2d1@huawei.com> (raw)
In-Reply-To: <7a342a60943cd7ed28d319b189c105ba@mail.gmail.com>
On 12/01/2021 11:44, Kashyap Desai wrote:
>>>> loops = 10000
>>> Is there any effect on random read IOPS when you decrease sdev queue
>>> depth? For sequential IO, IO merge can be enhanced by that way.
>>>
>> Let me check...
> John - Can you check your test with rq_affinity=2 and nomerges=2?
>
> I have noticed similar drops to what you have reported: once we get near
> the peak of the sdev or host queue depth, performance sometimes drops due
> to contention.
> But this behavior keeps changing, since kernel development in this area has
> been very active in the past, so I don't know the exact kernel versions
> involved.
> I have similar setup (16 SSDs) and I will try similar test on latest kernel.
>
> BTW - I remember that rq_affinity=2 plays a major role in such issues. I
> usually do my testing with rq_affinity=2.
>
Hi Kashyap,
As requested:
rq_affinity=1, nomerges=0 (default):

sdev queue depth   num jobs=1
  8                1650
 16                1638
 32                1612
 64                1573
254                1435  (default for LLDD)
rq_affinity=2, nomerges=2:

sdev queue depth   num jobs=1
  8                1236
 16                1423
 32                1438
 64                1438
254                1438  (default for LLDD)
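For reference, the knobs above are the standard SCSI/block-layer sysfs
attributes. The thread doesn't show how they were set, so this is a
hypothetical helper (the function name and the APPLY switch are mine) that
prints the writes for one row of the tables, or applies them when APPLY=1:

```shell
#!/bin/sh
# Hypothetical helper (not from the thread): emit or apply the sysfs writes
# corresponding to one row of the result tables above.
set_qd_tunables() {
    dev="$1"; depth="$2"; rq_aff="$3"; nomerges="$4"
    for pair in \
        "/sys/block/$dev/device/queue_depth:$depth" \
        "/sys/block/$dev/queue/rq_affinity:$rq_aff" \
        "/sys/block/$dev/queue/nomerges:$nomerges"
    do
        path=${pair%:*}                # sysfs attribute path
        val=${pair##*:}                # value to write
        if [ "${APPLY:-0}" = 1 ]; then
            echo "$val" > "$path"      # real write; needs root
        else
            printf 'echo %s > %s\n' "$val" "$path"
        fi
    done
}

# Dry run for the rq_affinity=2, nomerges=2, depth=16 row (sda is an
# assumed device name; repeat per disk in practice):
set_qd_tunables sda 16 2 2
```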
Setup as original: fio read, 12x SAS SSDs
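The fio job file itself wasn't posted in this thread; a minimal sketch of
what a sequential-read job consistent with the settings quoted above might
look like (the device name is an assumption, and only rw, numjobs, and
loops come from the thread):

```ini
[global]
rw=read            ; sequential read, as in the results above
ioengine=libaio
direct=1
numjobs=1
loops=10000        ; matches the quoted setting earlier in the thread

[disk0]
filename=/dev/sda  ; assumed device name; one such job per SSD
```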
So, again, we see that the performance impact of changing the sdev queue
depth depends on the workload and also on the rest of the queue config.
Thanks,
John