From: Hannes Reinecke <hare@suse.de>
To: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.de>,
linux-nvme@lists.infradead.org, Sagi Grimberg <sagi@grimberg.me>,
Keith Busch <keith.busch@wdc.com>,
James Smart <james.smart@broadcom.com>
Subject: [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations
Date: Tue, 22 Sep 2020 14:14:56 +0200
Message-ID: <20200922121501.32851-3-hare@suse.de>
In-Reply-To: <20200922121501.32851-1-hare@suse.de>
When removing target ports, all outstanding associations need to be
terminated / cleaned up. As this involves several exchanges on the
wire, a synchronization point is required to establish when these
exchanges have run their course and it is safe to delete the
association. So add a per-target workqueue, queue the association
deletion work on it, and flush this workqueue to ensure the
association can really be deleted.
Signed-off-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/target/fc.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 04ec0076ae59..63f5deb3b68a 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -99,6 +99,7 @@ struct nvmet_fc_tgtport {
struct list_head tgt_list; /* nvmet_fc_target_list */
struct device *dev; /* dev for dma mapping */
struct nvmet_fc_target_template *ops;
+ struct workqueue_struct *work_q;
struct nvmet_fc_ls_iod *iod;
spinlock_t lock;
@@ -1403,10 +1404,17 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
ida_init(&newrec->assoc_cnt);
newrec->max_sg_cnt = template->max_sgl_segments;
+ newrec->work_q = alloc_workqueue("ntfc%d", 0, 0,
+ newrec->fc_target_port.port_num);
+ if (!newrec->work_q) {
+ ret = -ENOMEM;
+ goto out_free_newrec;
+ }
+
ret = nvmet_fc_alloc_ls_iodlist(newrec);
if (ret) {
ret = -ENOMEM;
- goto out_free_newrec;
+ goto out_free_workq;
}
nvmet_fc_portentry_rebind_tgt(newrec);
@@ -1418,6 +1426,8 @@ nvmet_fc_register_targetport(struct nvmet_fc_port_info *pinfo,
*portptr = &newrec->fc_target_port;
return 0;
+out_free_workq:
+ destroy_workqueue(newrec->work_q);
out_free_newrec:
put_device(dev);
out_ida_put:
@@ -1443,6 +1453,8 @@ nvmet_fc_free_tgtport(struct kref *ref)
list_del(&tgtport->tgt_list);
spin_unlock_irqrestore(&nvmet_fc_tgtlock, flags);
+ destroy_workqueue(tgtport->work_q);
+
nvmet_fc_free_ls_iodlist(tgtport);
/* let the LLDD know we've finished tearing it down */
@@ -1481,11 +1493,13 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
&tgtport->assoc_list, a_list) {
if (!nvmet_fc_tgt_a_get(assoc))
continue;
- if (!schedule_work(&assoc->del_work))
+ if (!queue_work(tgtport->work_q, &assoc->del_work))
/* already deleting - release local reference */
nvmet_fc_tgt_a_put(assoc);
}
spin_unlock_irqrestore(&tgtport->lock, flags);
+
+ flush_workqueue(tgtport->work_q);
}
/**
@@ -1536,12 +1550,14 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
continue;
assoc->hostport->invalid = 1;
noassoc = false;
- if (!schedule_work(&assoc->del_work))
+ if (!queue_work(tgtport->work_q, &assoc->del_work))
/* already deleting - release local reference */
nvmet_fc_tgt_a_put(assoc);
}
spin_unlock_irqrestore(&tgtport->lock, flags);
+ flush_workqueue(tgtport->work_q);
+
/* if there's nothing to wait for - call the callback */
if (noassoc && tgtport->ops->host_release)
tgtport->ops->host_release(hosthandle);
@@ -1579,14 +1595,15 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
}
spin_unlock_irqrestore(&tgtport->lock, flags);
- nvmet_fc_tgtport_put(tgtport);
-
if (found_ctrl) {
- if (!schedule_work(&assoc->del_work))
+ if (!queue_work(tgtport->work_q, &assoc->del_work))
/* already deleting - release local reference */
nvmet_fc_tgt_a_put(assoc);
+ flush_workqueue(tgtport->work_q);
+ nvmet_fc_tgtport_put(tgtport);
return;
}
+ nvmet_fc_tgtport_put(tgtport);
spin_lock_irqsave(&nvmet_fc_tgtlock, flags);
}
--
2.16.4
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 16+ messages
2020-09-22 12:14 [PATCH 0/7] nvme-fcloop: fix shutdown and improve logging Hannes Reinecke
2020-09-22 12:14 ` [PATCH 1/7] nvme-fcloop: flush workqueue before calling nvme_fc_unregister_remoteport() Hannes Reinecke
2020-10-05 17:14 ` James Smart
2020-09-22 12:14 ` Hannes Reinecke [this message]
2020-10-05 17:18 ` [PATCH 2/7] nvmet-fc: use per-target workqueue when removing associations James Smart
2020-09-22 12:14 ` [PATCH 3/7] nvme-fcloop: use IDA for port ids Hannes Reinecke
2020-10-05 17:33 ` James Smart
2020-10-09 13:57 ` Hannes Reinecke
2020-09-22 12:14 ` [PATCH 4/7] nvmet-fc: use feature flag for virtual LLDD Hannes Reinecke
2020-10-05 17:37 ` James Smart
2020-09-22 12:14 ` [PATCH 5/7] nvme-fc: " Hannes Reinecke
2020-10-05 17:38 ` James Smart
2020-09-22 12:15 ` [PATCH 6/7] nvme-fcloop: use a device for nport Hannes Reinecke
2020-10-05 17:41 ` James Smart
2020-09-22 12:15 ` [PATCH 7/7] nvme-fcloop: use a device for lport Hannes Reinecke
2020-10-05 17:45 ` James Smart