From: Chengfeng Ye <dg573847474@gmail.com>
To: hch@lst.de, sagi@grimberg.me, kch@nvidia.com
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
Chengfeng Ye <dg573847474@gmail.com>
Subject: [PATCH] nvmet-rdma: use spin_lock_bh() on rsp_wr_wait_lock
Date: Tue, 26 Sep 2023 17:22:08 +0000
Message-ID: <20230926172208.55478-1-dg573847474@gmail.com>
It seems to me that read_cqe.done (nvmet_rdma_read_data_done) can be
executed in softirq context, as done callbacks typically are, and it
acquires rsp_wr_wait_lock along the following call chain:

nvmet_rdma_read_data_done()
  --> nvmet_rdma_release_rsp()
    --> spin_lock(&queue->rsp_wr_wait_lock)
So it seems more reasonable to use spin_lock_bh() on this lock;
otherwise the following deadlocks are possible:

nvmet_rdma_queue_response()
  --> nvmet_rdma_release_rsp()
    --> spin_lock(&queue->rsp_wr_wait_lock)
        <interrupt>
        --> nvmet_rdma_read_data_done()
          --> nvmet_rdma_release_rsp()
            --> spin_lock(&queue->rsp_wr_wait_lock)

nvmet_rdma_cm_handler()
  --> nvmet_rdma_handle_command()
    --> spin_lock(&queue->rsp_wr_wait_lock)
        <interrupt>
        --> nvmet_rdma_read_data_done()
          --> nvmet_rdma_release_rsp()
            --> spin_lock(&queue->rsp_wr_wait_lock)
Signed-off-by: Chengfeng Ye <dg573847474@gmail.com>
---
drivers/nvme/target/rdma.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..a01ed29fbd8a 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -520,7 +520,7 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
{
- spin_lock(&queue->rsp_wr_wait_lock);
+ spin_lock_bh(&queue->rsp_wr_wait_lock);
while (!list_empty(&queue->rsp_wr_wait_list)) {
struct nvmet_rdma_rsp *rsp;
bool ret;
@@ -529,16 +529,16 @@ static void nvmet_rdma_process_wr_wait_list(struct nvmet_rdma_queue *queue)
struct nvmet_rdma_rsp, wait_list);
list_del(&rsp->wait_list);
- spin_unlock(&queue->rsp_wr_wait_lock);
+ spin_unlock_bh(&queue->rsp_wr_wait_lock);
ret = nvmet_rdma_execute_command(rsp);
- spin_lock(&queue->rsp_wr_wait_lock);
+ spin_lock_bh(&queue->rsp_wr_wait_lock);
if (!ret) {
list_add(&rsp->wait_list, &queue->rsp_wr_wait_list);
break;
}
}
- spin_unlock(&queue->rsp_wr_wait_lock);
+ spin_unlock_bh(&queue->rsp_wr_wait_lock);
}
static u16 nvmet_rdma_check_pi_status(struct ib_mr *sig_mr)
@@ -994,9 +994,9 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
goto out_err;
if (unlikely(!nvmet_rdma_execute_command(cmd))) {
- spin_lock(&queue->rsp_wr_wait_lock);
+ spin_lock_bh(&queue->rsp_wr_wait_lock);
list_add_tail(&cmd->wait_list, &queue->rsp_wr_wait_list);
- spin_unlock(&queue->rsp_wr_wait_lock);
+ spin_unlock_bh(&queue->rsp_wr_wait_lock);
}
return;
--
2.17.1