From: Sumanesh Samanta <sumanesh.samanta@broadcom.com>
To: linux-scsi@vger.kernel.org,
"Martin K. Petersen" <martin.petersen@oracle.com>
Subject: Re: [PATCH 5/6] scsi: core: don't limit per-LUN queue depth for SSD when HBA needs
Date: Thu, 23 Jan 2020 17:01:47 -0700 [thread overview]
Message-ID: <ab676c4c-03fb-7eb9-6212-129eb83d0ee8@broadcom.com> (raw)
In-Reply-To: <yq1y2u1if7t.fsf@oracle.com>
Hi Martin,
>> A free host tag does not guarantee that the target device can queue
>> the command.
Your point is absolutely correct for a single SSD device, and probably
for some low-end controllers, but not for high-end HBAs that have their
own queuing mechanism.
High-end controllers may expose a SCSI interface, yet can have all
kinds of devices (NVMe/SCSI/SATA) behind them, and have their own
capability to queue IO and feed it to devices as needed. Those devices
should not be penalized with the overhead of the device_busy counter
just because they chose to expose themselves as SCSI devices (for
historical and backward-compatibility reasons). Rather, they should be
enabled, so that they can compete with devices exposing themselves as
NVMe devices.
It is those devices that this patch is meant for, and Ming has provided
a specific flag for it. Devices that cannot tolerate more outstanding
IO need not set the flag and will be unaffected.
In my humble opinion, the SCSI stack should be flexible enough to
support innovation and not limit some controllers just because others
have limited capability, especially when a whitelist flag is provided
so that such devices are unaffected.
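To illustrate the kind of bypass being argued for, here is a minimal
sketch in C. All names (sdev, bypass_device_busy, sdev_queue_ready) are
hypothetical stand-ins, not the actual kernel implementation; the real
logic lives in the SCSI midlayer's queue-ready path and uses atomics:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-LUN state; illustrative only. */
struct sdev {
	bool bypass_device_busy;  /* whitelist flag set by the HBA driver */
	int  queue_depth;         /* per-LUN limit */
	int  device_busy;         /* in-flight commands on this LUN */
};

/* Return true if a command may be dispatched to the device. */
static bool sdev_queue_ready(struct sdev *d)
{
	if (d->bypass_device_busy)
		return true;      /* HBA queues internally; skip the counter */
	if (d->device_busy >= d->queue_depth)
		return false;     /* LUN is saturated */
	d->device_busy++;
	return true;
}
```

With the flag set, the hot path never touches the shared counter, which
is the contention the series is trying to eliminate; with it clear, the
classic per-LUN limit applies unchanged.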
Sincerely,
Sumanesh
On 1/20/2020 9:52 PM, Martin K. Petersen wrote:
> Ming,
>
>> NVMe doesn't have such per-request-queue(namespace) queue depth, so it
>> is reasonable to ignore the limit for SCSI SSD too.
> It is really not. A free host tag does not guarantee that the target
> device can queue the command.
>
> The assumption that SSDs are somehow special because they are "fast" is
> not valid. Given the common hardware queue depth for a SAS device of
> ~128 it is often trivial to drive a device into a congestion
> scenario. We see it all the time for non-rotational devices, SSDs and
> arrays alike. The SSD heuristic is simply not going to fly.
>
> Don't get me wrong, I am very sympathetic to obliterating device_busy in
> the hot path. I just don't think it is as easy as just ignoring the
> counter and hope for the best. Dynamic queue depth management is an
> integral part of the SCSI protocol, not something we can just decide to
> bypass because a device claims to be of a certain media type or speed.
>
> I would prefer not to touch drivers that rely on cmd_per_lun / untagged
> operation and focus exclusively on the ones that use .track_queue_depth.
> For those we could consider an adaptive queue depth management scheme.
> Something like not maintaining device_busy until we actually get a
> QUEUE_FULL condition. And then rely on the existing queue depth ramp up
> heuristics to determine when to disable the busy counter again. Maybe
> with an additional watermark or time limit to avoid flip-flopping.
>
> If that approach turns out to work, we should convert all remaining
> non-legacy drivers to .track_queue_depth so we only have two driver
> queuing flavors to worry about.
>
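Martin's adaptive idea above can be sketched as a toy model: only
maintain the per-LUN counter after the device reports QUEUE FULL, and
drop tracking again once the ramp-up heuristic restores the full depth.
All names here are hypothetical, not kernel code, and the halving/
increment policy is a crude placeholder for the real heuristics:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of adaptive queue depth tracking; illustrative only. */
struct lun {
	bool tracking;     /* maintain device_busy at all? */
	int  device_busy;
	int  queue_depth;  /* current (possibly throttled) depth */
	int  max_depth;    /* depth granted by the driver */
};

static bool lun_may_dispatch(struct lun *l)
{
	if (!l->tracking)
		return true;           /* fast path: no counter at all */
	if (l->device_busy >= l->queue_depth)
		return false;
	l->device_busy++;
	return true;
}

/* Device returned QUEUE FULL: start counting and throttle. */
static void lun_queue_full(struct lun *l)
{
	l->tracking = true;
	if (l->queue_depth > 1)
		l->queue_depth /= 2;   /* crude throttle */
}

/* Ramp-up heuristic fired: grow depth, disable tracking at max. */
static void lun_ramp_up(struct lun *l)
{
	if (++l->queue_depth >= l->max_depth) {
		l->queue_depth = l->max_depth;
		l->tracking = false;   /* back to the counter-free fast path */
	}
}
```

A watermark or time limit, as Martin suggests, would sit in
lun_ramp_up() to keep a marginal device from flip-flopping between the
two modes.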
Thread overview: 27+ messages
2020-01-19 7:14 [PATCH 0/6] scsi: support bypass device busy check for some high end HBA with SSD Ming Lei
2020-01-19 7:14 ` [PATCH 1/6] scsi: mpt3sas: don't use .device_busy in device reset routine Ming Lei
2020-01-19 20:28 ` Bart Van Assche
2020-01-19 7:14 ` [PATCH 2/6] scsi: remove .for_blk_mq Ming Lei
2020-01-19 20:29 ` Bart Van Assche
2020-01-20 10:17 ` John Garry
2020-01-20 22:12 ` Elliott, Robert (Servers)
2020-01-31 6:30 ` Christoph Hellwig
2020-02-05 2:13 ` Martin K. Petersen
2020-01-19 7:14 ` [PATCH 3/6] scsi: sd: register request queue after sd_revalidate_disk is done Ming Lei
2020-01-19 20:36 ` Bart Van Assche
2020-01-19 7:14 ` [PATCH 4/6] block: freeze queue for updating QUEUE_FLAG_NONROT Ming Lei
2020-01-19 20:40 ` Bart Van Assche
2020-01-19 7:14 ` [PATCH 5/6] scsi: core: don't limit per-LUN queue depth for SSD when HBA needs Ming Lei
2020-01-19 20:58 ` Bart Van Assche
2020-01-21 4:52 ` Martin K. Petersen
2020-01-23 2:54 ` Ming Lei
2020-01-24 1:21 ` Martin K. Petersen
2020-01-24 1:59 ` Ming Lei
2020-01-24 12:43 ` Sumit Saxena
2020-01-28 4:04 ` Martin K. Petersen
2020-01-24 0:01 ` Sumanesh Samanta [this message]
2020-01-24 1:58 ` Martin K. Petersen
2020-01-24 19:41 ` Sumanesh Samanta
2020-01-28 4:22 ` Martin K. Petersen
2020-01-31 11:39 ` Ming Lei
2020-01-19 7:14 ` [PATCH 6/6] scsi: megaraid: set flag of no_device_queue_for_ssd Ming Lei