* [PATCH] Use rsps_lock in nvmet_rdma_free_rsp
@ 2016-11-22 5:32 Tomita.Haruo
0 siblings, 0 replies; 5+ messages in thread
From: Tomita.Haruo @ 2016-11-22 5:32 UTC (permalink / raw)
Hi Christoph and Sagi,
In nvmet_rdma_free_rsps(), free_list is manipulated without holding rsps_lock.
The following patch fixes this issue. Is this patch right?
rdma.c | 3 +++
1 file changed, 3 insertions(+)
--- linux-4.9-rc6/drivers/nvme/target/rdma.c.org 2016-11-21 06:52:19.000000000 +0900
+++ linux-4.9-rc6/drivers/nvme/target/rdma.c 2016-11-22 14:04:59.387510444 +0900
@@ -423,11 +423,14 @@ static void nvmet_rdma_free_rsps(struct
 {
 	struct nvmet_rdma_device *ndev = queue->dev;
 	int i, nr_rsps = queue->recv_queue_size * 2;
+	unsigned long flags;
 
 	for (i = 0; i < nr_rsps; i++) {
+		spin_lock_irqsave(&queue->rsps_lock, flags);
 		struct nvmet_rdma_rsp *rsp = &queue->rsps[i];
 
 		list_del(&rsp->free_list);
+		spin_unlock_irqrestore(&queue->rsps_lock, flags);
 		nvmet_rdma_free_rsp(ndev, rsp);
 	}
 	kfree(queue->rsps);
thanks
--
Haruo
* [PATCH] Use rsps_lock in nvmet_rdma_free_rsp
@ 2016-11-22 5:36 Tomita.Haruo
2016-11-22 7:41 ` Christoph Hellwig
0 siblings, 1 reply; 5+ messages in thread
From: Tomita.Haruo @ 2016-11-22 5:36 UTC (permalink / raw)
Hi Christoph and Sagi,
In nvmet_rdma_free_rsps(), free_list is manipulated without holding rsps_lock.
The following patch fixes this issue. Is this patch right?
rdma.c | 3 +++
1 file changed, 3 insertions(+)
--- linux-4.9-rc6/drivers/nvme/target/rdma.c.org 2016-11-21 06:52:19.000000000 +0900
+++ linux-4.9-rc6/drivers/nvme/target/rdma.c 2016-11-22 14:22:27.482438775 +0900
@@ -423,11 +423,14 @@ static void nvmet_rdma_free_rsps(struct
 {
 	struct nvmet_rdma_device *ndev = queue->dev;
 	int i, nr_rsps = queue->recv_queue_size * 2;
+	unsigned long flags;
 
 	for (i = 0; i < nr_rsps; i++) {
 		struct nvmet_rdma_rsp *rsp = &queue->rsps[i];
 
+		spin_lock_irqsave(&queue->rsps_lock, flags);
 		list_del(&rsp->free_list);
+		spin_unlock_irqrestore(&queue->rsps_lock, flags);
 		nvmet_rdma_free_rsp(ndev, rsp);
 	}
 	kfree(queue->rsps);
thanks
--
Haruo
* [PATCH] Use rsps_lock in nvmet_rdma_free_rsp
2016-11-22 5:36 [PATCH] Use rsps_lock in nvmet_rdma_free_rsp Tomita.Haruo
@ 2016-11-22 7:41 ` Christoph Hellwig
2016-11-25 5:00 ` Tomita.Haruo
0 siblings, 1 reply; 5+ messages in thread
From: Christoph Hellwig @ 2016-11-22 7:41 UTC (permalink / raw)
Hi Tomita,
what are you trying to protect against during device teardown?
* [PATCH] Use rsps_lock in nvmet_rdma_free_rsp
2016-11-22 7:41 ` Christoph Hellwig
@ 2016-11-25 5:00 ` Tomita.Haruo
2016-11-25 7:53 ` Christoph Hellwig
0 siblings, 1 reply; 5+ messages in thread
From: Tomita.Haruo @ 2016-11-25 5:00 UTC (permalink / raw)
Hi Christoph,
Thank you for your reply.
> what are you trying to protect against during device teardown?
I'm trying to be consistent with nvmet_rdma_get_rsp() and nvmet_rdma_put_rsp().
In those functions, queue->free_rsps is protected by rsps_lock,
and the rsp->free_list entries are manipulated under the same lock.
I am investigating whether there are race conditions in the NVMe target.
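For reference, those two helpers look roughly like this in 4.9-rc6 (quoted
from memory as a sketch, so please check against the actual source):

static struct nvmet_rdma_rsp *
nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
{
	struct nvmet_rdma_rsp *rsp;
	unsigned long flags;

	/* free_rsps and the free_list entries are only touched under rsps_lock */
	spin_lock_irqsave(&queue->rsps_lock, flags);
	rsp = list_first_entry(&queue->free_rsps,
				struct nvmet_rdma_rsp, free_list);
	list_del(&rsp->free_list);
	spin_unlock_irqrestore(&queue->rsps_lock, flags);

	return rsp;
}

static void nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
{
	unsigned long flags;

	/* returning the rsp to the free list takes the same lock */
	spin_lock_irqsave(&rsp->queue->rsps_lock, flags);
	list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);
	spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);
}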
For another function, I am also considering the following patch.
--- linux-4.9-rc6/drivers/nvme/target/rdma.c.orig 2016-11-25 06:51:06.000000000 +0900
+++ linux-4.9-rc6/drivers/nvme/target/rdma.c 2016-11-25 06:55:53.000000000 +0900
@@ -724,6 +724,7 @@ static void nvmet_rdma_recv_done(struct
 		container_of(wc->wr_cqe, struct nvmet_rdma_cmd, cqe);
 	struct nvmet_rdma_queue *queue = cq->cq_context;
 	struct nvmet_rdma_rsp *rsp;
+	unsigned long flags;
 
 	if (unlikely(wc->status != IB_WC_SUCCESS)) {
 		if (wc->status != IB_WC_WR_FLUSH_ERR) {
@@ -747,10 +748,9 @@ static void nvmet_rdma_recv_done(struct
 	rsp->flags = 0;
 	rsp->req.cmd = cmd->nvme_cmd;
 
+	spin_lock_irqsave(&queue->state_lock, flags);
 	if (unlikely(queue->state != NVMET_RDMA_Q_LIVE)) {
-		unsigned long flags;
 
-		spin_lock_irqsave(&queue->state_lock, flags);
 		if (queue->state == NVMET_RDMA_Q_CONNECTING)
 			list_add_tail(&rsp->wait_list, &queue->rsp_wait_list);
 		else
@@ -758,6 +758,7 @@ static void nvmet_rdma_recv_done(struct
 		spin_unlock_irqrestore(&queue->state_lock, flags);
 		return;
 	}
+	spin_unlock_irqrestore(&queue->state_lock, flags);
 
 	nvmet_rdma_handle_command(queue, rsp);
 }
--
Haruo
* [PATCH] Use rsps_lock in nvmet_rdma_free_rsp
2016-11-25 5:00 ` Tomita.Haruo
@ 2016-11-25 7:53 ` Christoph Hellwig
0 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2016-11-25 7:53 UTC (permalink / raw)
On Fri, Nov 25, 2016 at 05:00:26AM +0000, Tomita.Haruo@toshiba-sol.co.jp wrote:
> Hi Christoph,
>
> Thank you for your reply.
>
> > what are you trying to protect against during device teardown?
>
> I'm trying to protect nvmet_rdma_get_rsp and nvmet_rdma_put_rsp.
> In these functions, queue->free_rsps is protected by rsps_lock.
> rsp->free_list is also protected at the same time.
If we can invoke these functions concurrently with nvmet_rdma_free_rsp,
we have a severe lifetime rule violation. Under what circumstances do you
see such concurrency?
> I am investigating whether there is a race issue on the target of nvme.
> For another function, I also think of the following patch.
What are you trying to protect against with this patch?
end of thread, other threads:[~2016-11-25 7:53 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
2016-11-22 5:36 [PATCH] Use rsps_lock in nvmet_rdma_free_rsp Tomita.Haruo
2016-11-22 7:41 ` Christoph Hellwig
2016-11-25 5:00 ` Tomita.Haruo
2016-11-25 7:53 ` Christoph Hellwig
-- strict thread matches above, loose matches on Subject: below --
2016-11-22 5:32 Tomita.Haruo