Linux-NVME Archive on lore.kernel.org
From: James Smart <jsmart2021@gmail.com>
To: linux-nvme@lists.infradead.org
Cc: James Smart <jsmart2021@gmail.com>, Paul Ely <paul.ely@broadcom.com>
Subject: [PATCH v2 24/26] lpfc: nvme: Add Receive LS Request and Send LS Response support to nvme
Date: Tue, 31 Mar 2020 09:50:09 -0700	[thread overview]
Message-ID: <20200331165011.15819-25-jsmart2021@gmail.com> (raw)
In-Reply-To: <20200331165011.15819-1-jsmart2021@gmail.com>

Now that common helpers exist, add the ability to receive NVME LS requests
to the driver. New requests will be delivered to the transport by
nvme_fc_rcv_ls_req().

To complete the LS exchange, add Send LS Response support and the
corresponding send-completion handling to the driver.
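As a rough illustration of the validate-then-forward flow that
lpfc_nvme_handle_lsreq() implements in the diff below, here is a minimal
user-space mock. All structure and function names in this sketch are
hypothetical stand-ins, not the real lpfc/nvme-fc types; the transport
hand-off is stubbed out.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the lpfc/nvme-fc structures (illustration only). */
struct remote_port { int valid; };
struct xchg_ctx {
	struct remote_port *rport;   /* stands in for axchg->ndlp->nrport */
	void *localport_priv;        /* stands in for vport->localport->private */
};

/* Stub for the nvme_fc_rcv_ls_req() hand-off; 0 means the transport
 * accepted the LS. */
static int rcv_ls_req_stub(struct remote_port *rp)
{
	(void)rp;
	return 0;
}

/* Mirrors the ordering in the patch: reject with -EINVAL (-22) when the
 * remote port or local port private data is missing, otherwise forward
 * the LS to the transport. */
static int handle_lsreq(struct xchg_ctx *axchg)
{
	if (!axchg->rport)
		return -22;
	if (!axchg->localport_priv)
		return -22;
	return rcv_ls_req_stub(axchg->rport) ? 1 : 0;
}
```

The point of the ordering is that the LS is only handed to the transport
once both ends of the association have been validated.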

Signed-off-by: Paul Ely <paul.ely@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>

---
v2:
  Removed the xmt_ls_xxx and rcv_ls_req_xxx atomic stats that aren't
  meaningful to be tracked.
---
 drivers/scsi/lpfc/lpfc_nvme.c | 71 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index c6082c65d902..a868743ec3e3 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -400,6 +400,10 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
  * request. Any remaining validation is done and the LS is then forwarded
  * to the nvme-fc transport via nvme_fc_rcv_ls_req().
  *
+ * The calling sequence should be: nvme_fc_rcv_ls_req() -> (processing)
+ * -> lpfc_nvme_xmt_ls_rsp/cmp -> req->done.
+ * __lpfc_nvme_xmt_ls_rsp_cmp should free the allocated axchg.
+ *
  * Returns 0 if LS was handled and delivered to the transport
  * Returns 1 if LS failed to be handled and should be dropped
  */
@@ -407,6 +411,40 @@ int
 lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
 			struct lpfc_async_xchg_ctx *axchg)
 {
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct lpfc_vport *vport;
+	struct lpfc_nvme_rport *lpfc_rport;
+	struct nvme_fc_remote_port *remoteport;
+	struct lpfc_nvme_lport *lport;
+	uint32_t *payload = axchg->payload;
+	int rc;
+
+	vport = axchg->ndlp->vport;
+	lpfc_rport = axchg->ndlp->nrport;
+	if (!lpfc_rport)
+		return -EINVAL;
+
+	remoteport = lpfc_rport->remoteport;
+	if (!vport->localport)
+		return -EINVAL;
+
+	lport = vport->localport->private;
+	if (!lport)
+		return -EINVAL;
+
+	rc = nvme_fc_rcv_ls_req(remoteport, &axchg->ls_rsp, axchg->payload,
+				axchg->size);
+
+	lpfc_printf_log(phba, KERN_INFO, LOG_NVME_DISC,
+			"6205 NVME Unsol rcv: sz %d rc %d: %08x %08x %08x "
+			"%08x %08x %08x\n",
+			axchg->size, rc,
+			*payload, *(payload+1), *(payload+2),
+			*(payload+3), *(payload+4), *(payload+5));
+
+	if (!rc)
+		return 0;
+#endif
 	return 1;
 }
 
@@ -858,6 +896,37 @@ __lpfc_nvme_ls_abort(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
 	return 1;
 }
 
+static int
+lpfc_nvme_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+		     struct nvme_fc_remote_port *remoteport,
+		     struct nvmefc_ls_rsp *ls_rsp)
+{
+	struct lpfc_async_xchg_ctx *axchg =
+		container_of(ls_rsp, struct lpfc_async_xchg_ctx, ls_rsp);
+	struct lpfc_nvme_lport *lport;
+	int rc;
+
+	if (axchg->phba->pport->load_flag & FC_UNLOADING)
+		return -ENODEV;
+
+	lport = (struct lpfc_nvme_lport *)localport->private;
+
+	rc = __lpfc_nvme_xmt_ls_rsp(axchg, ls_rsp, __lpfc_nvme_xmt_ls_rsp_cmp);
+
+	if (rc) {
+		/*
+		 * unless the failure is due to having already sent
+		 * the response, an abort will be generated for the
+		 * exchange if the rsp can't be sent.
+		 */
+		if (rc != -EALREADY)
+			atomic_inc(&lport->xmt_ls_abort);
+		return rc;
+	}
+
+	return 0;
+}
+
 /**
  * lpfc_nvme_ls_abort - Abort a prior NVME LS request
  * @lpfc_nvme_lport: Transport localport that LS is to be issued from.
@@ -2090,6 +2159,7 @@ static struct nvme_fc_port_template lpfc_nvme_template = {
 	.fcp_io       = lpfc_nvme_fcp_io_submit,
 	.ls_abort     = lpfc_nvme_ls_abort,
 	.fcp_abort    = lpfc_nvme_fcp_abort,
+	.xmt_ls_rsp   = lpfc_nvme_xmt_ls_rsp,
 
 	.max_hw_queues = 1,
 	.max_sgl_segments = LPFC_NVME_DEFAULT_SEGS,
@@ -2285,6 +2355,7 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
 		atomic_set(&lport->cmpl_fcp_err, 0);
 		atomic_set(&lport->cmpl_ls_xb, 0);
 		atomic_set(&lport->cmpl_ls_err, 0);
+
 		atomic_set(&lport->fc4NvmeLsRequests, 0);
 		atomic_set(&lport->fc4NvmeLsCmpls, 0);
 	}
-- 
2.16.4



Thread overview: 28+ messages
2020-03-31 16:49 [PATCH v2 00/26] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support James Smart
2020-03-31 16:49 ` [PATCH v2 01/26] nvme-fc: Sync header to FC-NVME-2 rev 1.08 James Smart
2020-03-31 16:49 ` [PATCH v2 02/26] nvme-fc and nvmet-fc: revise LLDD api for LS reception and LS request James Smart
2020-03-31 16:49 ` [PATCH v2 03/26] nvme-fc nvmet-fc: refactor for common LS definitions James Smart
2020-03-31 16:49 ` [PATCH v2 04/26] nvmet-fc: Better size LS buffers James Smart
2020-03-31 16:49 ` [PATCH v2 05/26] nvme-fc: Ensure private pointers are NULL if no data James Smart
2020-03-31 16:49 ` [PATCH v2 06/26] nvme-fc: convert assoc_active flag to bit op James Smart
2020-03-31 16:49 ` [PATCH v2 07/26] nvme-fc: Update header and host for common definitions for LS handling James Smart
2020-03-31 16:49 ` [PATCH v2 08/26] nvmet-fc: Update target " James Smart
2020-03-31 16:49 ` [PATCH v2 09/26] nvme-fc: Add Disconnect Association Rcv support James Smart
2020-03-31 16:49 ` [PATCH v2 10/26] nvmet-fc: add LS failure messages James Smart
2020-03-31 16:49 ` [PATCH v2 11/26] nvmet-fc: perform small cleanups on unneeded checks James Smart
2020-03-31 16:49 ` [PATCH v2 12/26] nvmet-fc: track hostport handle for associations James Smart
2020-03-31 16:49 ` [PATCH v2 13/26] nvmet-fc: rename ls_list to ls_rcv_list James Smart
2020-03-31 16:49 ` [PATCH v2 14/26] nvmet-fc: Add Disconnect Association Xmt support James Smart
2020-03-31 16:50 ` [PATCH v2 15/26] nvme-fcloop: refactor to enable target to host LS James Smart
2020-03-31 16:50 ` [PATCH v2 16/26] nvme-fcloop: add target to host LS request support James Smart
2020-03-31 16:50 ` [PATCH v2 17/26] lpfc: Refactor lpfc nvme headers James Smart
2020-03-31 16:50 ` [PATCH v2 18/26] lpfc: Refactor nvmet_rcv_ctx to create lpfc_async_xchg_ctx James Smart
2020-03-31 16:50 ` [PATCH v2 19/26] lpfc: Commonize lpfc_async_xchg_ctx state and flag definitions James Smart
2020-03-31 16:50 ` [PATCH v2 20/26] lpfc: Refactor NVME LS receive handling James Smart
2020-03-31 16:50 ` [PATCH v2 21/26] lpfc: Refactor Send LS Request support James Smart
2020-03-31 16:50 ` [PATCH v2 22/26] lpfc: Refactor Send LS Abort support James Smart
2020-03-31 16:50 ` [PATCH v2 23/26] lpfc: Refactor Send LS Response support James Smart
2020-03-31 16:50 ` James Smart [this message]
2020-03-31 16:50 ` [PATCH v2 25/26] lpfc: nvmet: Add support for NVME LS request hosthandle James Smart
2020-03-31 16:50 ` [PATCH v2 26/26] lpfc: nvmet: Add Send LS Request and Abort LS Request support James Smart
2020-04-01  8:27 ` [PATCH v2 00/26] nvme-fc/nvmet-fc: Add FC-NVME-2 disconnect association support Christoph Hellwig
