From: James Smart <jsmart2021@gmail.com>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>,
Keith Busch <kbusch@kernel.org>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: Sagi Grimberg <sagi@grimberg.me>, "hch@lst.de" <hch@lst.de>,
"martin.petersen@oracle.com" <martin.petersen@oracle.com>,
"dgilbert@interlog.com" <dgilbert@interlog.com>,
Hannes Reinecke <hare@suse.de>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"lsf-pc@lists.linuxfoundation.org"
<lsf-pc@lists.linuxfoundation.org>
Subject: Re: [LSF/MM/BPF BOF] Userspace command abouts
Date: Sat, 25 Feb 2023 08:14:49 -0800
Message-ID: <4feead4e-2cb5-a6bd-0db8-a4fe08b8efc6@gmail.com>
In-Reply-To: <0fe59301-65e6-d8a9-033e-0243ad59c56b@opensource.wdc.com>
On 2/24/2023 8:15 PM, Damien Le Moal wrote:
> On 2/25/23 10:51, Keith Busch wrote:
>> On Fri, Feb 24, 2023 at 11:54:39PM +0000, Chaitanya Kulkarni wrote:
>>> I do think that we should work on CDL for NVMe as it will solve some of
>>> the timeout related problems effectively than using aborts or any other
>>> mechanism.
>>
>> That proposal exists in NVMe TWG, but doesn't appear to have recent activity.
>> The last I heard, one point of contention was where the duration limit property
>> exists: within the command, or the queue. From my perspective, if it's not at
>> the queue level, the limit becomes meaningless, but hey, it's not up to me.
>
> Limit attached to the command makes things more flexible and easier for the
> host, so personally, I prefer that. But this has an impact on the controller:
> the device needs to pull in *all* commands to be able to know the limits and do
> scheduling/aborts appropriately. That is not something that the device designers
> like, for obvious reasons (device internal resources...).
>
> On the other hand, limits attached to queues could lead to either a serious
> increase in the number of queues (PCI space & number of IRQ vectors limits), or,
> loss of performance as a particular queue with the desired limit would be
> accessed from multiple CPUs on the host (lock contention). Tricky problem I
> think with lots of compromises.
>
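[Editor's note: the queue-scaling concern above can be made concrete with a rough sketch. All numbers below are illustrative assumptions, not values from the NVMe specification: if every distinct duration limit needs its own queue per CPU to avoid cross-CPU lock contention, the queue count multiplies with the number of limit classes.]

```python
# Rough sketch of the queue-count tradeoff described above.
# Numbers are illustrative only, not NVMe spec values.

def queues_needed(num_cpus: int, num_limit_classes: int) -> int:
    """Per-queue limits: one queue per (CPU, limit class) pair,
    so a limited queue is never shared across CPUs (no lock
    contention), at the cost of more queues/IRQ vectors."""
    return num_cpus * num_limit_classes

# Per-command limits: one queue per CPU suffices.
per_command = queues_needed(64, 1)   # 64 queues
# Per-queue limits with, say, 4 duration classes:
per_queue = queues_needed(64, 4)     # 256 queues
```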
From a fabrics perspective:

- At the command: workable. However, the times are distorted, as the
limit won't include fabric transmission time of the command or
response, nor any retransmission of either under a fabric protecting
against loss.
- At the queue: not workable. It effectively becomes a host transport
timer, as the CDL has to cover all fabric transmission times and the
only entity that can time/enforce it is the host transport. Also, what
does the host transport do when the timer expires? There are only a
couple of things it can do, all of them disruptive and at best delaying
the response back to the caller.
- CDL can only be meaningful (i.e. completion times close to the CDL)
in the absence of transport errors. Command termination, perhaps tied
with connection loss/failure detection as well as connection/queue
termination or association termination, can have timers well above the
CDL value. Any guarantee of command completion within time X can
become meaningless.
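[Editor's note: the timing distortion in the first point can be sketched as simple arithmetic, with made-up values: the host-observed completion time includes fabric transit and any retransmission timeouts, none of which a device-side limit can account for.]

```python
# Illustrative sketch: host-observed latency vs. a device-side CDL.
# All values are hypothetical, chosen only to show the effect.

def host_observed_ms(device_ms: float, fabric_rtt_ms: float,
                     retransmits: int,
                     retransmit_timeout_ms: float) -> float:
    """Time the host actually waits: device service time plus one
    fabric round trip, plus any retransmission timeouts incurred by
    a loss-protecting fabric."""
    return device_ms + fabric_rtt_ms + retransmits * retransmit_timeout_ms

cdl_ms = 10.0
# Lossless fabric: observed time stays close to the CDL.
clean = host_observed_ms(device_ms=9.0, fabric_rtt_ms=0.5,
                         retransmits=0, retransmit_timeout_ms=0.0)
# One retransmission: the device met its 9 ms budget, yet the host
# sees far more than the CDL, so the end-to-end guarantee evaporates.
lossy = host_observed_ms(device_ms=9.0, fabric_rtt_ms=0.5,
                         retransmits=1, retransmit_timeout_ms=50.0)
```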
-- james
Thread overview: 22+ messages
2023-02-16 11:50 [LSF/MM/BPF BOF] Userspace command abouts Hannes Reinecke
2023-02-16 16:40 ` Keith Busch
2023-02-17 18:53 ` Chaitanya Kulkarni
2023-02-18 9:50 ` [LSF/MM/BPF BOF] Userspace command aborts Hannes Reinecke
2023-02-21 18:15 ` Chaitanya Kulkarni
2023-02-20 11:24 ` [LSF/MM/BPF BOF] Userspace command abouts Sagi Grimberg
2023-02-21 16:25 ` Douglas Gilbert
2023-02-22 14:37 ` Sagi Grimberg
2023-02-22 14:53 ` Keith Busch
2023-02-23 15:35 ` Sagi Grimberg
2023-02-24 23:54 ` Chaitanya Kulkarni
2023-02-25 1:51 ` Keith Busch
2023-02-25 4:15 ` Damien Le Moal
2023-02-25 16:14 ` James Smart [this message]
2023-02-27 16:33 ` Sagi Grimberg
2023-02-27 17:28 ` Hannes Reinecke
2023-02-27 17:44 ` Keith Busch
2023-02-27 21:18 ` Damien Le Moal
2023-02-27 21:42 ` Damien Le Moal
2023-02-28 8:05 ` Sagi Grimberg
2023-02-27 21:17 ` Damien Le Moal
2023-02-27 8:20 ` Hannes Reinecke