From: James Smart <jsmart2021@gmail.com>
To: linux-scsi@vger.kernel.org
Cc: James Smart <jsmart2021@gmail.com>,
Dick Kennedy <dick.kennedy@broadcom.com>,
James Smart <james.smart@broadcom.com>
Subject: [PATCH v2 01/17] lpfc: Add nvme initiator devloss support
Date: Thu, 1 Jun 2017 21:06:55 -0700
Message-ID: <20170602040711.21046-2-jsmart2021@gmail.com>
In-Reply-To: <20170602040711.21046-1-jsmart2021@gmail.com>
Add nvme initiator devloss support
The existing implementation assumed no devloss behavior in the
transport (i.e. immediate teardown), so the code did not properly
handle delayed nvme rport device unregister calls.
In addition, the driver was not correctly cycling the rport
port role across each register-unregister-reregister sequence.
This patch does the following:
Rework the code to properly handle rport device unregister
calls and potential re-allocation of the remoteport structure
if the port comes back in under dev_loss_tmo.
Correct code that was incorrectly cycling the rport
port role for each register-unregister-reregister process.
Prep the code to enable calling the nvme_fc transport api
to dynamically update dev_loss_tmo when the scsi sysfs interface
changes it.
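The dynamic update path can be illustrated with a rough standalone
sketch; the structures and the helper below are simplified stand-ins,
not the driver's actual lpfc types or NLP_CHK_NODE_ACT() macro:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the driver's node/rport structures. */
struct rport  { unsigned int dev_loss_tmo; };
struct node   { int active; struct rport *rport; struct node *next; };
struct vport  { unsigned int cfg_devloss_tmo; struct node *nodes; };

/* Walk the vport's node list and push the new timeout to every
 * active node that has a registered remote port, mirroring the
 * reworked lpfc_update_rport_devloss_tmo() loop. */
static void update_devloss_tmo(struct vport *vp)
{
	struct node *n;

	for (n = vp->nodes; n; n = n->next) {
		if (!n->active)
			continue;
		if (n->rport)
			n->rport->dev_loss_tmo = vp->cfg_devloss_tmo;
	}
}
```

In the driver the same walk runs under the shost lock; the sketch
leaves locking out for brevity.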
Memset the rpinfo structure in the registration call to enforce
"accept nvme transport defaults". Driver parameters still
influence the dev_loss_tmo transport setting dynamically.
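The memset-then-fill pattern can be sketched as follows; the struct
and flag values here are hypothetical simplifications of
nvme_fc_port_info and the NLP_* / FC_PORT_ROLE_NVME_* constants in
the real headers:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical flag values for illustration only. */
#define NLP_NVME_TARGET      0x1
#define NLP_NVME_INITIATOR   0x2
#define NLP_NVME_DISCOVERY   0x4

#define ROLE_NVME_TARGET     0x1
#define ROLE_NVME_INITIATOR  0x2
#define ROLE_NVME_DISCOVERY  0x4

struct port_info {
	uint32_t port_id;
	uint32_t port_role;
	uint32_t dev_loss_tmo;	/* 0 == accept transport default */
};

/* Zero the whole structure first so every field the driver does not
 * set explicitly (notably dev_loss_tmo) falls back to the transport
 * default, then translate node-type bits into port roles. */
static void fill_port_info(struct port_info *pi, uint32_t did,
			   uint32_t nlp_type)
{
	memset(pi, 0, sizeof(*pi));
	pi->port_id = did;
	if (nlp_type & NLP_NVME_TARGET)
		pi->port_role |= ROLE_NVME_TARGET;
	if (nlp_type & NLP_NVME_INITIATOR)
		pi->port_role |= ROLE_NVME_INITIATOR;
	if (nlp_type & NLP_NVME_DISCOVERY)
		pi->port_role |= ROLE_NVME_DISCOVERY;
}
```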
Simplify the register function: the driver was incorrectly
searching its local rport list to determine resume-vs-new semantics,
which is not valid as the transport already handles this. The rport
is resumed if the rport handed back matches the ndlp->nrport pointer;
otherwise, devloss fired and the ndlp's nrport is NULL.
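The resume-vs-new decision reduces to a pointer comparison, sketched
here with simplified stand-in types rather than the driver's real
structures:

```c
#include <assert.h>
#include <stddef.h>

struct lpfc_rport { int dummy; };	/* simplified stand-in */
struct lpfc_node  { struct lpfc_rport *nrport; };

/* After a successful remoteport registration, the rport embedded in
 * the returned remoteport either matches ndlp->nrport (the transport
 * resumed the old registration within dev_loss_tmo) or is new
 * (devloss already fired and cleared nrport to NULL). */
static int rport_resumed(const struct lpfc_node *ndlp,
			 const struct lpfc_rport *returned)
{
	return ndlp->nrport != NULL && ndlp->nrport == returned;
}
```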
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <james.smart@broadcom.com>
---
drivers/scsi/lpfc/lpfc_attr.c | 7 ++-
drivers/scsi/lpfc/lpfc_nvme.c | 141 ++++++++++++++++--------------------------
2 files changed, 57 insertions(+), 91 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index bb2d9e238225..37f258fcf6d4 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -3198,9 +3198,12 @@ lpfc_update_rport_devloss_tmo(struct lpfc_vport *vport)
shost = lpfc_shost_from_vport(vport);
spin_lock_irq(shost->host_lock);
- list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp)
- if (NLP_CHK_NODE_ACT(ndlp) && ndlp->rport)
+ list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ if (!NLP_CHK_NODE_ACT(ndlp))
+ continue;
+ if (ndlp->rport)
ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo;
+ }
spin_unlock_irq(shost->host_lock);
}
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 8008c8205fb6..70675fd7d884 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -186,13 +186,13 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
/* Remove this rport from the lport's list - memory is owned by the
* transport. Remove the ndlp reference for the NVME transport before
- * calling state machine to remove the node, this is devloss = 0
- * semantics.
+ * calling state machine to remove the node.
*/
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
"6146 remoteport delete complete %p\n",
remoteport);
list_del(&rport->list);
+ ndlp->nrport = NULL;
lpfc_nlp_put(ndlp);
rport_err:
@@ -1466,7 +1466,7 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
/* The remote node has to be ready to send an abort. */
if ((ndlp->nlp_state != NLP_STE_MAPPED_NODE) &&
- !(ndlp->nlp_type & NLP_NVME_TARGET)) {
+ (ndlp->nlp_type & NLP_NVME_TARGET)) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_ABTS,
"6048 rport %p, DID x%06x not ready for "
"IO. State x%x, Type x%x\n",
@@ -2340,69 +2340,44 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
localport = vport->localport;
lport = (struct lpfc_nvme_lport *)localport->private;
- if (ndlp->nlp_type & (NLP_NVME_TARGET | NLP_NVME_INITIATOR)) {
-
- /* The driver isn't expecting the rport wwn to change
- * but it might get a different DID on a different
- * fabric.
+ /* NVME rports are not preserved across devloss.
+ * Just register this instance. Note, rpinfo->dev_loss_tmo
+ * is left 0 to indicate accept transport defaults. The
+ * driver communicates port role capabilities consistent
+ * with the PRLI response data.
+ */
+ memset(&rpinfo, 0, sizeof(struct nvme_fc_port_info));
+ rpinfo.port_id = ndlp->nlp_DID;
+ if (ndlp->nlp_type & NLP_NVME_TARGET)
+ rpinfo.port_role |= FC_PORT_ROLE_NVME_TARGET;
+ if (ndlp->nlp_type & NLP_NVME_INITIATOR)
+ rpinfo.port_role |= FC_PORT_ROLE_NVME_INITIATOR;
+
+ if (ndlp->nlp_type & NLP_NVME_DISCOVERY)
+ rpinfo.port_role |= FC_PORT_ROLE_NVME_DISCOVERY;
+
+ rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
+ rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
+ ret = nvme_fc_register_remoteport(localport, &rpinfo, &remote_port);
+ if (!ret) {
+ /* If the ndlp already has an nrport, this is just
+ * a resume of the existing rport. Else this is a
+ * new rport.
*/
- list_for_each_entry(rport, &lport->rport_list, list) {
- if (rport->remoteport->port_name !=
- wwn_to_u64(ndlp->nlp_portname.u.wwn))
- continue;
- lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NVME_DISC,
- "6035 lport %p, found matching rport "
- "at wwpn 0x%llx, Data: x%x x%x x%x "
- "x%06x\n",
- lport,
- rport->remoteport->port_name,
- rport->remoteport->port_id,
- rport->remoteport->port_role,
+ rport = remote_port->private;
+ if (ndlp->nrport == rport) {
+ lpfc_printf_vlog(ndlp->vport, KERN_INFO,
+ LOG_NVME_DISC,
+ "6014 Rebinding lport to "
+ "rport wwpn 0x%llx, "
+ "Data: x%x x%x x%x x%06x\n",
+ remote_port->port_name,
+ remote_port->port_id,
+ remote_port->port_role,
ndlp->nlp_type,
ndlp->nlp_DID);
- remote_port = rport->remoteport;
- if ((remote_port->port_id == 0) &&
- (remote_port->port_role ==
- FC_PORT_ROLE_NVME_DISCOVERY)) {
- remote_port->port_id = ndlp->nlp_DID;
- remote_port->port_role &=
- ~FC_PORT_ROLE_NVME_DISCOVERY;
- if (ndlp->nlp_type & NLP_NVME_TARGET)
- remote_port->port_role |=
- FC_PORT_ROLE_NVME_TARGET;
- if (ndlp->nlp_type & NLP_NVME_INITIATOR)
- remote_port->port_role |=
- FC_PORT_ROLE_NVME_INITIATOR;
-
- lpfc_printf_vlog(ndlp->vport, KERN_INFO,
- LOG_NVME_DISC,
- "6014 Rebinding lport to "
- "rport wwpn 0x%llx, "
- "Data: x%x x%x x%x x%06x\n",
- remote_port->port_name,
- remote_port->port_id,
- remote_port->port_role,
- ndlp->nlp_type,
- ndlp->nlp_DID);
- }
- return 0;
- }
-
- /* NVME rports are not preserved across devloss.
- * Just register this instance.
- */
- rpinfo.port_id = ndlp->nlp_DID;
- rpinfo.port_role = 0;
- if (ndlp->nlp_type & NLP_NVME_TARGET)
- rpinfo.port_role |= FC_PORT_ROLE_NVME_TARGET;
- if (ndlp->nlp_type & NLP_NVME_INITIATOR)
- rpinfo.port_role |= FC_PORT_ROLE_NVME_INITIATOR;
- rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
- rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
- ret = nvme_fc_register_remoteport(localport, &rpinfo,
- &remote_port);
- if (!ret) {
- rport = remote_port->private;
+ } else {
+ /* New rport. */
rport->remoteport = remote_port;
rport->lport = lport;
rport->ndlp = lpfc_nlp_get(ndlp);
@@ -2413,26 +2388,22 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
list_add_tail(&rport->list, &lport->rport_list);
lpfc_printf_vlog(vport, KERN_INFO,
LOG_NVME_DISC | LOG_NODE,
- "6022 Binding new rport to lport %p "
- "Rport WWNN 0x%llx, Rport WWPN 0x%llx "
- "DID x%06x Role x%x\n",
+ "6022 Binding new rport to "
+ "lport %p Rport WWNN 0x%llx, "
+ "Rport WWPN 0x%llx DID "
+ "x%06x Role x%x\n",
lport,
rpinfo.node_name, rpinfo.port_name,
rpinfo.port_id, rpinfo.port_role);
- } else {
- lpfc_printf_vlog(vport, KERN_ERR,
- LOG_NVME_DISC | LOG_NODE,
- "6031 RemotePort Registration failed "
- "err: %d, DID x%06x\n",
- ret, ndlp->nlp_DID);
}
} else {
- ret = -EINVAL;
- lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
- "6027 Unknown nlp_type x%x on DID x%06x "
- "ndlp %p. Not Registering nvme rport\n",
- ndlp->nlp_type, ndlp->nlp_DID, ndlp);
+ lpfc_printf_vlog(vport, KERN_ERR,
+ LOG_NVME_DISC | LOG_NODE,
+ "6031 RemotePort Registration failed "
+ "err: %d, DID x%06x\n",
+ ret, ndlp->nlp_DID);
}
+
return ret;
#else
return 0;
@@ -2460,7 +2431,6 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
struct lpfc_nvme_lport *lport;
struct lpfc_nvme_rport *rport;
struct nvme_fc_remote_port *remoteport;
- unsigned long wait_tmo;
localport = vport->localport;
@@ -2491,6 +2461,10 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
*/
if (ndlp->nlp_type & (NLP_NVME_TARGET | NLP_NVME_INITIATOR)) {
init_completion(&rport->rport_unreg_done);
+
+ /* No concern about the role change on the nvme remoteport.
+ * The transport will update it.
+ */
ret = nvme_fc_unregister_remoteport(remoteport);
if (ret != 0) {
lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
@@ -2499,17 +2473,6 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
ret, remoteport->port_state);
}
- /* Wait for the driver's delete completion routine to finish
- * before proceeding. This guarantees the transport and driver
- * have completed the unreg process.
- */
- wait_tmo = msecs_to_jiffies(5000);
- ret = wait_for_completion_timeout(&rport->rport_unreg_done,
- wait_tmo);
- if (ret == 0) {
- lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
- "6169 Unreg nvme wait timeout\n");
- }
}
return;
--
2.11.0