public inbox for linux-scsi@vger.kernel.org
From: Kashyap Desai <kashyap.desai@broadcom.com>
To: John Garry <john.garry@huawei.com>, Ming Lei <ming.lei@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
	Hannes Reinecke <hare@suse.com>,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	"James E.J. Bottomley" <jejb@linux.ibm.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-scsi@vger.kernel.org,
	Sathya Prakash Veerichetty <sathya.prakash@broadcom.com>,
	Sreekanth Reddy <sreekanth.reddy@broadcom.com>,
	Suganath Prabu Subramani  <suganath-prabu.subramani@broadcom.com>,
	PDL-MPT-FUSIONLINUX <mpt-fusionlinux.pdl@broadcom.com>,
	chenxiang <chenxiang66@hisilicon.com>
Subject: RE: About scsi device queue depth
Date: Wed, 13 Jan 2021 19:04:34 +0530	[thread overview]
Message-ID: <0cd4b8de66fc7d7140a9c73d8e26327d@mail.gmail.com> (raw)
In-Reply-To: <16a66e96-a08f-78d1-155a-41bb5d31f2d1@huawei.com>

>
> On 12/01/2021 11:44, Kashyap Desai wrote:
> >>>> loops = 10000
> >>> Is there any effect on random read IOPS when you decrease sdev queue
> >>> depth? For sequential IO, IO merge can be enhanced by that way.
> >>>
> >> Let me check...
> > John - Can you check your test with rq_affinity = 2  and nomerges=2 ?
> >
> > I have noticed similar drops (whatever you have reported) -  "once we
> > reach near the peak of sdev or host queue depth, performance sometimes
> > drops due to contention."
> > But this behavior keeps changing, since kernel changes in this area
> > have been very active in the past. So I don't know the exact details
> > about kernel versions etc.
> > I have similar setup (16 SSDs) and I will try similar test on latest
> > kernel.
> >
> > BTW - I remember that rq_affinity=2 plays a major role in such issues.
> > I usually do testing with rq_affinity = 2.
> >
>
> Hi Kashyap,
>
> As requested:
>
> rq_affinity=1, nomerges=0 (default)
>
> sdev queue depth	num jobs=1
> 8			1650
> 16			1638
> 32			1612
> 64			1573
> 254			1435 (default for LLDD)
>
> rq_affinity=2, nomerges=2
>
> sdev queue depth	num jobs=1
> 8			1236
> 16			1423
> 32			1438
> 64			1438
> 254			1438 (default for LLDD)
>
> Setup as original: fio read, 12x SAS SSDs

What is the issue with rq_affinity=2 and nomerges=2 ?
It looks like - "Dropping IOPS from the peak (1.6M) is not the issue here;
rather, we are not able to reach peak IOPS. IOPS increase gradually and
then saturate."
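For anyone reproducing these runs, the tunables under discussion live in
sysfs. A minimal sketch, assuming a hypothetical test device sdb (adjust
for your setup):

```shell
# Hypothetical device name; repeat for each SSD under test.
DEV=sdb

# rq_affinity=2: complete a request on the exact CPU that issued it.
# nomerges=2:    disable all request merging in the block layer.
if [ -w /sys/block/$DEV/queue/rq_affinity ]; then
    echo 2 > /sys/block/$DEV/queue/rq_affinity
    echo 2 > /sys/block/$DEV/queue/nomerges
fi

# Lower the SCSI device queue depth (the LLDD default above was 254).
if [ -w /sys/block/$DEV/device/queue_depth ]; then
    echo 32 > /sys/block/$DEV/device/queue_depth
fi
```

The writes are guarded so the sketch is a no-op on machines without the
device present.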

>
> So, again, we see that the performance change from varying sdev queue
> depth depends on the workload and also on other queue config.

Can you send me the "dmesg" logs of your setup? What is the host_busy
counter while you run your test? I am not able to reproduce the issue on
my setup.
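host_busy is exported per SCSI host in sysfs; a sketch for sampling it
while fio is running, assuming a hypothetical host0 (find yours under
/sys/class/scsi_host/):

```shell
# Hypothetical SCSI host; host_busy counts commands in flight at the host.
HOST=host0
BUSY_FILE=/sys/class/scsi_host/$HOST/host_busy

# Sample host_busy once per second for 10 seconds during the fio run.
if [ -r "$BUSY_FILE" ]; then
    for i in $(seq 1 10); do
        cat "$BUSY_FILE"
        sleep 1
    done
fi
```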

To avoid any IO merge effects, can we use random read (randread) data?
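A random-read workload leaves the block layer nothing to merge, which
isolates queue-depth effects from merge effects. A sketch of a fio job
file for that; the device, iodepth, and runtime values are assumptions,
not the exact parameters used in the runs above:

```ini
; Hypothetical fio job file (randread.fio); adjust filename per SSD.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
runtime=30
time_based
group_reporting

[sdb-randread]
filename=/dev/sdb
iodepth=32
numjobs=1
```

Run with "fio randread.fio"; add one [job] section per SSD for a
12-device test.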

Kashyap

>
> Thanks,
> John



Thread overview: 22+ messages
2021-01-11 16:21 About scsi device queue depth John Garry
2021-01-11 16:40 ` James Bottomley
2021-01-11 17:11   ` John Garry
2021-01-12  6:35     ` James Bottomley
2021-01-12 10:27       ` John Garry
2021-01-12 16:40         ` Bryan Gurney
2021-01-12 16:47         ` James Bottomley
2021-01-12 17:20           ` Bryan Gurney
2021-01-11 17:31   ` Douglas Gilbert
2021-01-13  6:07   ` Martin K. Petersen
2021-01-13  6:36     ` Damien Le Moal
2021-01-12  1:42 ` Ming Lei
2021-01-12  8:56   ` John Garry
2021-01-12  9:06     ` Ming Lei
2021-01-12  9:23       ` John Garry
2021-01-12 11:44         ` Kashyap Desai
2021-01-13 12:17           ` John Garry
2021-01-13 13:34             ` Kashyap Desai [this message]
2021-01-13 15:39               ` John Garry
2021-01-12 17:44       ` John Garry
2021-01-12  7:23 ` Hannes Reinecke
2021-01-12  9:15   ` John Garry
