From: Sagi Grimberg <sagi@grimberg.me>
To: linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Keith Busch <kbusch@kernel.org>
Cc: James Smart <james.smart@broadcom.com>
Subject: [PATCH] nvme-fabrics: allow to queue requests for live queues
Date: Mon, 27 Jul 2020 22:35:23 -0700 [thread overview]
Message-ID: <20200728053523.21657-1-sagi@grimberg.me> (raw)

Right now we are failing requests based on the controller state
(which is checked inline in nvmf_check_ready); however, we should
definitely accept requests if the queue is live.
When entering controller reset, we transition the controller into
NVME_CTRL_RESETTING and then return BLK_STS_RESOURCE for non-mpath
requests (those that have blk_noretry_request set). This is also the
case for NVME_REQ_USERCMD for some reason, but there shouldn't be any
reason for us to reject this I/O in a controller reset.

In a non-mpath setup, this means that such requests are simply
requeued over and over forever, never allowing the q_usage_counter
to drop its final reference, which causes controller reset to hang
if it runs concurrently with heavy I/O.

While we are at it, remove the redundant NVME_CTRL_NEW case, which
should never see any I/O as the controller must first transition to
NVME_CTRL_CONNECTING.

Fixes: 35897b920c8a ("nvme-fabrics: fix and refine state checks in __nvmf_check_ready")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/host/fabrics.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 4ec4829d6233..2e7838f42e36 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -564,21 +564,13 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
{
struct nvme_request *req = nvme_req(rq);
- /*
- * If we are in some state of setup or teardown only allow
- * internally generated commands.
- */
- if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
- return false;
-
/*
* Only allow commands on a live queue, except for the connect command,
* which is require to set the queue live in the appropinquate states.
*/
switch (ctrl->state) {
- case NVME_CTRL_NEW:
case NVME_CTRL_CONNECTING:
- if (nvme_is_fabrics(req->cmd) &&
+ if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
return true;
break;
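
For context, the request-gating behavior after this patch can be sketched
roughly as follows. This is a simplified, userspace model of the
__nvmf_check_ready() decision, not the actual kernel code: the enum values,
struct fields, and the check_ready() helper are hypothetical stand-ins for
the kernel's controller states, blk_rq_is_passthrough(), and the fabrics
connect check.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified controller states (the kernel has more). */
enum ctrl_state { CTRL_LIVE, CTRL_CONNECTING, CTRL_RESETTING, CTRL_DEAD };

/* Hypothetical stand-in for the properties the kernel derives from
 * the request and its queue. */
struct fake_request {
	bool queue_live;     /* the queue has been set live */
	bool is_passthrough; /* models blk_rq_is_passthrough(rq) */
	bool is_connect;     /* models a fabrics connect command */
};

/* Sketch of the post-patch decision: a live queue accepts any request
 * regardless of controller state; on a non-live queue, only the connect
 * command is let through, and only while connecting. */
static bool check_ready(enum ctrl_state state, const struct fake_request *rq)
{
	if (rq->queue_live)
		return true;

	switch (state) {
	case CTRL_CONNECTING:
		return rq->is_passthrough && rq->is_connect;
	default:
		return false;
	}
}
```

The key difference from the pre-patch code is that user and filesystem
I/O is no longer rejected up front: if the queue is live, it is accepted
even while the controller is resetting, so requests are not requeued
forever and q_usage_counter can drain.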
--
2.25.1