From: Nilay Shroff <nilay@linux.ibm.com>
To: linux-nvme@lists.infradead.org
Cc: kbusch@kernel.org, hch@lst.de, sagi@grimberg.me, axboe@fb.com,
chaitanyak@nvidia.com, dlemoal@kernel.org, gjoyce@linux.ibm.com,
Nilay Shroff <nilay@linux.ibm.com>
Subject: [PATCH v3 1/3] nvme-loop: flush off pending I/O while shutting down loop controller
Date: Tue, 8 Oct 2024 11:51:50 +0530
Message-ID: <20241008062210.1094083-2-nilay@linux.ibm.com>
In-Reply-To: <20241008062210.1094083-1-nilay@linux.ibm.com>
While shutting down the loop controller, we first quiesce the admin/IO
queue, delete the admin/IO tag-set and then finally destroy the admin/IO
queue. However, it's quite possible that during the window between
quiescing and destroying the admin/IO queue, some admin/IO request sneaks
in. If that happens, we could encounter a hung task, because the shutdown
operation can't make forward progress until all pending I/O is flushed.
This commit ensures that before destroying the admin/IO queue, we
unquiesce it, so that any outstanding requests, which were added after
the admin/IO queue was quiesced, are flushed to completion.
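For illustration, a simplified sketch of the resulting admin-queue
teardown ordering (condensed from nvme_loop_destroy_admin_queue() in the
diff below; the quiesce itself happens earlier in the shutdown path, and
intermediate steps are elided):

	/* Earlier in shutdown: block new submissions. */
	nvme_quiesce_admin_queue(&ctrl->ctrl);
	...
	/*
	 * Unquiesce before tearing down, so that any request which
	 * sneaked in while the queue was quiesced is dispatched and
	 * completed instead of blocking the teardown forever.
	 */
	nvme_unquiesce_admin_queue(&ctrl->ctrl);
	nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
	nvme_remove_admin_tag_set(&ctrl->ctrl);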
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/target/loop.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e32790d8fc26..a9d112d34d4f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -265,6 +265,13 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
 {
 	if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
 		return;
+	/*
+	 * It's possible that some requests might have been added
+	 * after the admin queue was stopped/quiesced. So now start
+	 * the queue to flush these requests to completion.
+	 */
+	nvme_unquiesce_admin_queue(&ctrl->ctrl);
+
 	nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
 	nvme_remove_admin_tag_set(&ctrl->ctrl);
 }
@@ -297,6 +304,12 @@ static void nvme_loop_destroy_io_queues(struct nvme_loop_ctrl *ctrl)
 		nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);
 	}
 	ctrl->ctrl.queue_count = 1;
+	/*
+	 * It's possible that some requests might have been added
+	 * after the IO queue was stopped/quiesced. So now start
+	 * the queue to flush these requests to completion.
+	 */
+	nvme_unquiesce_io_queues(&ctrl->ctrl);
 }

 static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
--
2.45.2