From: Bart Van Assche <bvanassche@acm.org>
To: Keith Busch <keith.busch@intel.com>
Cc: t@localhost.localdomain, linux-nvme@lists.infradead.org,
linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH 1/5] blk-mq: Export reading mq request state
Date: Fri, 08 Mar 2019 12:47:10 -0800
Message-ID: <1552078030.45180.88.camel@acm.org>
In-Reply-To: <20190308191954.GC5232@localhost.localdomain>
On Fri, 2019-03-08 at 12:19 -0700, Keith Busch wrote:
> On Fri, Mar 08, 2019 at 10:42:17AM -0800, Bart Van Assche wrote:
> > I think that the NVMe spec provides a more elegant mechanism,
> > namely deleting the I/O submission queues. According to what I read in the
1.3c spec, deleting an I/O submission queue forces an NVMe controller to post a
> > completion for every outstanding request. See also section 5.6 in the NVMe
> > 1.3c spec.
>
> That's actually not what it says. The controller may or may not post a
> completion entry with a delete SQ command. The first behavior is defined
> in the spec as "explicit" and the second as "implicit". For implicit,
> we have to iterate inflight tags.
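
(To make "iterate inflight tags" concrete, here is a minimal sketch using the
blk-mq iterator available at the time; the helper names are hypothetical, and
the request-ownership races discussed elsewhere in this thread are ignored.)

#include <linux/blk-mq.h>

/* Hypothetical callback, invoked by blk_mq_tagset_busy_iter() for
 * every tag the driver still owns. */
static bool fail_inflight_rq(struct request *rq, void *data, bool reserved)
{
	/* Skip requests that were allocated but never started. */
	if (blk_mq_request_started(rq))
		blk_mq_end_request(rq, BLK_STS_IOERR);
	return true;	/* continue iterating */
}

/* Hypothetical helper: fail everything still in flight when the
 * controller deleted the SQ "implicitly", i.e. without posting
 * completions for the outstanding commands. */
static void fail_all_inflight(struct blk_mq_tag_set *set)
{
	blk_mq_tagset_busy_iter(set, fail_inflight_rq, NULL);
}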
Hi Keith,
Thanks for the clarification. Are you aware of any mechanism in the NVMe spec
that causes all outstanding requests to fail? With RDMA this is easy: all
one has to do is change the queue pair state to IB_QPS_ERR. See also
ib_drain_qp() in the RDMA core.
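
(For reference, the state change is a one-liner with the verbs API; this is a
sketch of roughly what ib_drain_qp() arranges internally, not the literal core
code.)

#include <rdma/ib_verbs.h>

/* Move a queue pair to the error state. The HCA then flushes every
 * outstanding work request with a flush-error completion, so nothing
 * can stay outstanding indefinitely. */
static int move_qp_to_error(struct ib_qp *qp)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };

	return ib_modify_qp(qp, &attr, IB_QP_STATE);
}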
If no such mechanism has been defined in the NVMe spec: have you considered
canceling all outstanding requests instead of calling blk_mq_end_request()
for each of them?
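
(To make the distinction in that question concrete: one reading of "cancel" is
to route each request through the driver's normal completion path rather than
ending it directly. A hedged sketch; a real driver would first record an error
status in its per-request data, as nvme_cancel_request() does.)

/* Hypothetical alternative to ending the request directly: hand it to
 * the regular completion machinery so the driver's ->complete()
 * callback still runs. */
static bool cancel_inflight_rq(struct request *rq, void *data, bool reserved)
{
	if (blk_mq_request_started(rq))
		blk_mq_complete_request(rq);
	return true;
}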
Thanks,
Bart.
Thread overview: 30+ messages
2019-03-08 17:40 [PATCH 1/5] blk-mq: Export reading mq request state Keith Busch
2019-03-08 17:40 ` [PATCH 2/5] blk-mq: Export iterating queue requests Keith Busch
2019-03-08 18:08 ` Bart Van Assche
2019-03-08 18:13 ` Keith Busch
2019-03-08 17:40 ` [PATCH 3/5] blk-mq: Iterate tagset over all requests Keith Busch
2019-03-08 17:40 ` [PATCH 4/5] nvme: Fail dead namespace's entered requests Keith Busch
2019-03-08 18:15 ` Bart Van Assche
2019-03-08 18:19 ` Keith Busch
2019-03-08 21:54 ` Bart Van Assche
2019-03-08 22:06 ` Keith Busch
2019-03-11 3:58 ` jianchao.wang
2019-03-11 15:42 ` Keith Busch
2019-03-08 17:40 ` [PATCH 5/5] nvme/pci: Remove queue IO flushing hack Keith Busch
2019-03-08 18:19 ` Bart Van Assche
2019-03-11 18:40 ` Christoph Hellwig
2019-03-11 19:37 ` Keith Busch
2019-03-27 8:31 ` Christoph Hellwig
2019-03-27 13:21 ` Keith Busch
2019-03-28 1:42 ` jianchao.wang
2019-03-28 3:33 ` Keith Busch
2019-03-08 18:07 ` [PATCH 1/5] blk-mq: Export reading mq request state Bart Van Assche
2019-03-08 18:15 ` Keith Busch
2019-03-08 18:42 ` Bart Van Assche
2019-03-08 19:19 ` Keith Busch
2019-03-08 20:47 ` Bart Van Assche [this message]
2019-03-08 21:14 ` Keith Busch
2019-03-08 21:25 ` Bart Van Assche
2019-03-08 21:31 ` Keith Busch
2019-03-08 20:21 ` Sagi Grimberg
2019-03-08 20:29 ` Keith Busch