From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>,
Damien Le Moal <damien.lemoal@opensource.wdc.com>,
Keith Busch <kbusch@kernel.org>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: "hch@lst.de" <hch@lst.de>,
"martin.petersen@oracle.com" <martin.petersen@oracle.com>,
"dgilbert@interlog.com" <dgilbert@interlog.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"lsf-pc@lists.linuxfoundation.org"
<lsf-pc@lists.linuxfoundation.org>
Subject: Re: [LSF/MM/BPF BOF] Userspace command aborts
Date: Mon, 27 Feb 2023 18:28:41 +0100 [thread overview]
Message-ID: <3ea301b1-c808-ce08-8ec8-3a631b385fb9@suse.de> (raw)
In-Reply-To: <316431ed-1727-7e80-2090-84ac5b334f74@grimberg.me>
On 2/27/23 17:33, Sagi Grimberg wrote:
>
>>> On Fri, Feb 24, 2023 at 11:54:39PM +0000, Chaitanya Kulkarni wrote:
>>>> I do think that we should work on CDL for NVMe as it will solve some of
>>>> the timeout related problems effectively than using aborts or any other
>>>> mechanism.
>>>
>>> That proposal exists in NVMe TWG, but doesn't appear to have recent
>>> activity. The last I heard, one point of contention was where the
>>> duration limit property exists: within the command, or the queue.
>>> From my perspective, if it's not at the queue level, the limit
>>> becomes meaningless, but hey, it's not up to me.
>>
>> Limit attached to the command makes things more flexible and easier
>> for the host, so personally, I prefer that. But this has an impact on
>> the controller: the device needs to pull in *all* commands to be able
>> to know the limits and do scheduling/aborts appropriately. That is
>> not something that device designers like, for obvious reasons (device
>> internal resources...).
>>
>> On the other hand, limits attached to queues could lead to either a
>> serious increase in the number of queues (PCI space & number of IRQ
>> vectors limits), or loss of performance, as a particular queue with
>> the desired limit would be accessed from multiple CPUs on the host
>> (lock contention). Tricky problem, I think, with lots of compromises.
>
> I'm not up to speed on how CDL is defined, but I'm unclear how CDL at
> the queue level would cause the host to open more queues?
>
> Another question: does CDL have any relationship with NVMe "Time
> Limited Error Recovery", where the host sets a timeout via a feature
> and indicates per command whether the controller should respect it?
>
> While this is not a full-blown every queue/command has its own timeout,
> it could address the original use-case given by Hannes. And it's already
> there.
I guess that is the NVMe version of CDLs; can you give me a reference
for it?
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman