From: Ming Lei <ming.lei@redhat.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Hannes Reinecke <hare@suse.de>, Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org,
"James E . J . Bottomley" <jejb@linux.ibm.com>,
"Martin K . Petersen" <martin.petersen@oracle.com>,
linux-scsi@vger.kernel.org,
Sathya Prakash <sathya.prakash@broadcom.com>,
Chaitra P B <chaitra.basappa@broadcom.com>,
Suganath Prabu Subramani <suganath-prabu.subramani@broadcom.com>,
Kashyap Desai <kashyap.desai@broadcom.com>,
Sumit Saxena <sumit.saxena@broadcom.com>,
Shivasharan S <shivasharan.srikanteshwara@broadcom.com>,
"Ewan D . Milne" <emilne@redhat.com>,
Christoph Hellwig <hch@lst.de>,
Bart Van Assche <bart.vanassche@wdc.com>
Subject: Re: [PATCH 4/4] scsi: core: don't limit per-LUN queue depth for SSD
Date: Sat, 23 Nov 2019 06:00:31 +0800 [thread overview]
Message-ID: <20191122220031.GC8700@ming.t460p> (raw)
In-Reply-To: <5f84476f-95b4-79b6-f72d-4e2de447065c@acm.org>
On Fri, Nov 22, 2019 at 10:14:51AM -0800, Bart Van Assche wrote:
> On 11/22/19 12:09 AM, Ming Lei wrote:
> > On Thu, Nov 21, 2019 at 04:45:48PM +0100, Hannes Reinecke wrote:
> > > On 11/21/19 1:53 AM, Ming Lei wrote:
> > > > On Wed, Nov 20, 2019 at 11:05:24AM +0100, Hannes Reinecke wrote:
> > > > > I would far prefer if we could delegate any queueing decision to the
> > > > > elevators, and completely drop the device_busy flag for all devices.
> > > >
> > > > If you drop it, you may create a big sequential IO performance drop
> > > > on HDD; that is why this patch only bypasses sdev->queue_depth on
> > > > SSD. NVMe bypasses it because no one uses HDD via NVMe.
> > > >
> > > I still wonder how much performance drop we actually see; what seems to
> > > happen is that device_busy just arbitrarily pushes back to the block
> > > layer, giving it more time to do merging.
> > > I do think we can do better than that...
> >
> > For example, running the following script[1] on 4-core VM:
> >
> > ---------------------------------------------
> > |                     | QD: 255 | QD: 32   |
> > ---------------------------------------------
> > | fio read throughput | 825MB/s | 1432MB/s |
> > ---------------------------------------------
> >
> > [ ... ]
>
> Hi Ming,
>
> Thanks for having shared these numbers. I think this is very useful
> information. Do these results show the performance drop that happens if
> /sys/block/.../device/queue_depth exceeds .can_queue? What I am wondering
The above test just shows that IO merge plays an important role here, and
one important precondition for triggering IO merge is that .get_budget
returns false.

If sdev->queue_depth is too big, .get_budget may never return false.
That is why this patch only bypasses .device_busy for SSD.
> about is how important these results are in the context of this discussion.
> Are there any modern SCSI devices for which a SCSI LLD sets
> scsi_host->can_queue and scsi_host->cmd_per_lun such that the device
> responds with BUSY? What surprised me is that only three SCSI LLDs call
There are many such HBAs, for which sdev->queue_depth is smaller than
.can_queue, especially in the case of a small number of LUNs.
> scsi_track_queue_full() (mptsas, bfa, esp_scsi). Does that mean that BUSY
> responses from a SCSI device or HBA are rare?
It is only true for some HBAs.
thanks,
Ming