linux-nvme.lists.infradead.org archive mirror
* [PATCH v8 0/8] nvme-fc: FPIN link integrity handling
@ 2025-07-09 21:19 Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 1/8] fc_els: use 'union fc_tlv_desc' Bryan Gurney
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

FPIN LI (link integrity) messages are received when the attached
fabric detects hardware errors. In response to these messages, I/O
should be directed away from the affected ports, which should only
be used if the 'optimized' paths are unavailable.
Upon port reset the paths should be returned to service, as the
affected hardware might have been replaced.
This patch set adds a new controller flag, 'NVME_CTRL_MARGINAL',
which is checked during multipath path selection, causing the
path to be skipped when checking for 'optimized' paths. If no
optimized paths are available, the 'marginal' paths are considered
for path selection alongside the 'non-optimized' paths.
It also introduces a new nvme-fc callback, 'nvme_fc_fpin_rcv()', to
evaluate the FPIN LI TLV payload and set the 'marginal' state on
all affected rports.

The testing for this patch set was performed by Bryan Gurney, using the
process outlined by John Meneghini's presentation at LSFMM 2024, where
the fibre channel switch sends an FPIN notification on a specific switch
port, and the following is checked on the initiator:

1. The controllers corresponding to the paths on the port that received
the notification show the NVME_CTRL_MARGINAL flag set.

   \
    +- nvme4 fc traddr=c,host_traddr=e live optimized
    +- nvme5 fc traddr=8,host_traddr=e live non-optimized
    +- nvme8 fc traddr=e,host_traddr=f marginal optimized
    +- nvme9 fc traddr=a,host_traddr=f marginal non-optimized

2. The I/O statistics of the test namespace show no I/O activity on the
controllers with NVME_CTRL_MARGINAL set.

   Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
   nvme4c4n1         0.00         0.00         0.00         0.00
   nvme4c5n1     25001.00         0.00        97.66         0.00
   nvme4c9n1     25000.00         0.00        97.66         0.00
   nvme4n1       50011.00         0.00       195.36         0.00


   Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
   nvme4c4n1         0.00         0.00         0.00         0.00
   nvme4c5n1     48360.00         0.00       188.91         0.00
   nvme4c9n1      1642.00         0.00         6.41         0.00
   nvme4n1       49981.00         0.00       195.24         0.00


   Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
   nvme4c4n1         0.00         0.00         0.00         0.00
   nvme4c5n1     50001.00         0.00       195.32         0.00
   nvme4c9n1         0.00         0.00         0.00         0.00
   nvme4n1       50016.00         0.00       195.38         0.00

Link: https://people.redhat.com/jmeneghi/LSFMM_2024/LSFMM_2024_NVMe_Cancel_and_FPIN.pdf

More rigorous testing was also performed to ensure proper path migration
on each of the eight different FPIN link integrity events, particularly
during a scenario where there are only non-optimized paths available, in
a state where all paths are marginal.  On a configuration with a
round-robin iopolicy, when all paths on the host show as marginal, I/O
continues on the optimized path that was most recently non-marginal.
From this point, if both of the optimized paths are down, I/O properly
continues on the remaining paths.

The testing so far has been done with an Emulex host bus adapter using
lpfc.  When tested on a QLogic host bus adapter, a warning was found
when the first FPIN link integrity event was received by the host:

  kernel: memcpy: detected field-spanning write (size 60) of single field
  "((uint8_t *)fpin_pkt + buffer_copy_offset)"
  at drivers/scsi/qla2xxx/qla_isr.c:1221 (size 44)

Line 1221 of qla_isr.c is in the function qla27xx_copy_fpin_pkt().


Changes to the original submission:
- Changed flag name to 'marginal'
- Do not block marginal path; influence path selection instead
  to de-prioritize marginal paths

Changes to v2:
- Split off driver-specific modifications
- Introduce 'union fc_tlv_desc' to avoid casts

Changes to v3:
- Include reviews from Justin Tee
- Split marginal path handling patch

Changes to v4:
- Change 'u8' to '__u8' on fc_tlv_desc to fix a failure to build
- Print 'marginal' instead of 'live' in the state of controllers
  when they are marginal

Changes to v5:
- Minor spelling corrections to patch descriptions

Changes to v6:
- No code changes; added note about additional testing

Changes to v7:
- Split nvme core marginal flag addition into its own patch
- Add patch for queue_depth marginal path support

Bryan Gurney (2):
  nvme: add NVME_CTRL_MARGINAL flag
  nvme: sysfs: emit the marginal path state in show_state()

Hannes Reinecke (5):
  fc_els: use 'union fc_tlv_desc'
  nvme-fc: marginal path handling
  nvme-fc: nvme_fc_fpin_rcv() callback
  lpfc: enable FPIN notification for NVMe
  qla2xxx: enable FPIN notification for NVMe

John Meneghini (1):
  nvme-multipath: queue-depth support for marginal paths

 drivers/nvme/host/core.c         |   1 +
 drivers/nvme/host/fc.c           |  99 +++++++++++++++++++
 drivers/nvme/host/multipath.c    |  24 +++--
 drivers/nvme/host/nvme.h         |   6 ++
 drivers/nvme/host/sysfs.c        |   4 +-
 drivers/scsi/lpfc/lpfc_els.c     |  84 ++++++++--------
 drivers/scsi/qla2xxx/qla_isr.c   |   3 +
 drivers/scsi/scsi_transport_fc.c |  27 +++--
 include/linux/nvme-fc-driver.h   |   3 +
 include/uapi/scsi/fc/fc_els.h    | 165 +++++++++++++++++--------------
 10 files changed, 275 insertions(+), 141 deletions(-)

-- 
2.50.0



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v8 1/8] fc_els: use 'union fc_tlv_desc'
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 2/8] nvme: add NVME_CTRL_MARGINAL flag Bryan Gurney
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

From: Hannes Reinecke <hare@kernel.org>

Introduce 'union fc_tlv_desc' to have a common structure for all FC
ELS TLV structures and avoid type casts.

[bgurney: The cast inside fc_tlv_next_desc() used "u8", which caused
a build failure. Use "__u8" instead.]

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Justin Tee <justin.tee@broadcom.com>
Tested-by: Bryan Gurney <bgurney@redhat.com>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_els.c     |  75 +++++++-------
 drivers/scsi/scsi_transport_fc.c |  27 +++--
 include/uapi/scsi/fc/fc_els.h    | 165 +++++++++++++++++--------------
 3 files changed, 135 insertions(+), 132 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index b1a61eca8295..c7cbc5b50dfe 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -3937,7 +3937,7 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 {
 	IOCB_t *irsp_iocb;
 	struct fc_els_edc_resp *edc_rsp;
-	struct fc_tlv_desc *tlv;
+	union fc_tlv_desc *tlv;
 	struct fc_diag_cg_sig_desc *pcgd;
 	struct fc_diag_lnkflt_desc *plnkflt;
 	struct lpfc_dmabuf *pcmd, *prsp;
@@ -4028,7 +4028,7 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 			goto out;
 		}
 
-		dtag = be32_to_cpu(tlv->desc_tag);
+		dtag = be32_to_cpu(tlv->hdr.desc_tag);
 		switch (dtag) {
 		case ELS_DTAG_LNK_FAULT_CAP:
 			if (bytes_remain < FC_TLV_DESC_SZ_FROM_LENGTH(tlv) ||
@@ -4043,7 +4043,7 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 					sizeof(struct fc_diag_lnkflt_desc));
 				goto out;
 			}
-			plnkflt = (struct fc_diag_lnkflt_desc *)tlv;
+			plnkflt = &tlv->lnkflt;
 			lpfc_printf_log(phba, KERN_INFO,
 				LOG_ELS | LOG_LDS_EVENT,
 				"4617 Link Fault Desc Data: 0x%08x 0x%08x "
@@ -4070,7 +4070,7 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 				goto out;
 			}
 
-			pcgd = (struct fc_diag_cg_sig_desc *)tlv;
+			pcgd = &tlv->cg_sig;
 			lpfc_printf_log(
 				phba, KERN_INFO, LOG_ELS | LOG_CGN_MGMT,
 				"4616 CGN Desc Data: 0x%08x 0x%08x "
@@ -4125,10 +4125,8 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
 }
 
 static void
-lpfc_format_edc_lft_desc(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_format_edc_lft_desc(struct lpfc_hba *phba, struct fc_diag_lnkflt_desc *lft)
 {
-	struct fc_diag_lnkflt_desc *lft = (struct fc_diag_lnkflt_desc *)tlv;
-
 	lft->desc_tag = cpu_to_be32(ELS_DTAG_LNK_FAULT_CAP);
 	lft->desc_len = cpu_to_be32(
 		FC_TLV_DESC_LENGTH_FROM_SZ(struct fc_diag_lnkflt_desc));
@@ -4141,10 +4139,8 @@ lpfc_format_edc_lft_desc(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
 }
 
 static void
-lpfc_format_edc_cgn_desc(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_format_edc_cgn_desc(struct lpfc_hba *phba, struct fc_diag_cg_sig_desc *cgd)
 {
-	struct fc_diag_cg_sig_desc *cgd = (struct fc_diag_cg_sig_desc *)tlv;
-
 	/* We are assuming cgd was zero'ed before calling this routine */
 
 	/* Configure the congestion detection capability */
@@ -4233,7 +4229,7 @@ lpfc_issue_els_edc(struct lpfc_vport *vport, uint8_t retry)
 	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_iocbq *elsiocb;
 	struct fc_els_edc *edc_req;
-	struct fc_tlv_desc *tlv;
+	union fc_tlv_desc *tlv;
 	u16 cmdsize;
 	struct lpfc_nodelist *ndlp;
 	u8 *pcmd = NULL;
@@ -4272,13 +4268,13 @@ lpfc_issue_els_edc(struct lpfc_vport *vport, uint8_t retry)
 	tlv = edc_req->desc;
 
 	if (cgn_desc_size) {
-		lpfc_format_edc_cgn_desc(phba, tlv);
+		lpfc_format_edc_cgn_desc(phba, &tlv->cg_sig);
 		phba->cgn_sig_freq = lpfc_fabric_cgn_frequency;
 		tlv = fc_tlv_next_desc(tlv);
 	}
 
 	if (lft_desc_size)
-		lpfc_format_edc_lft_desc(phba, tlv);
+		lpfc_format_edc_lft_desc(phba, &tlv->lnkflt);
 
 	lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_CGN_MGMT,
 			 "4623 Xmit EDC to remote "
@@ -5824,7 +5820,7 @@ lpfc_issue_els_edc_rsp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 {
 	struct lpfc_hba  *phba = vport->phba;
 	struct fc_els_edc_resp *edc_rsp;
-	struct fc_tlv_desc *tlv;
+	union fc_tlv_desc *tlv;
 	struct lpfc_iocbq *elsiocb;
 	IOCB_t *icmd, *cmd;
 	union lpfc_wqe128 *wqe;
@@ -5868,10 +5864,10 @@ lpfc_issue_els_edc_rsp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 		FC_TLV_DESC_LENGTH_FROM_SZ(struct fc_els_lsri_desc));
 	edc_rsp->lsri.rqst_w0.cmd = ELS_EDC;
 	tlv = edc_rsp->desc;
-	lpfc_format_edc_cgn_desc(phba, tlv);
+	lpfc_format_edc_cgn_desc(phba, &tlv->cg_sig);
 	tlv = fc_tlv_next_desc(tlv);
 	if (lft_desc_size)
-		lpfc_format_edc_lft_desc(phba, tlv);
+		lpfc_format_edc_lft_desc(phba, &tlv->lnkflt);
 
 	lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP,
 			      "Issue EDC ACC:      did:x%x flg:x%lx refcnt %d",
@@ -9256,7 +9252,7 @@ lpfc_els_rcv_edc(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 {
 	struct lpfc_hba  *phba = vport->phba;
 	struct fc_els_edc *edc_req;
-	struct fc_tlv_desc *tlv;
+	union fc_tlv_desc *tlv;
 	uint8_t *payload;
 	uint32_t *ptr, dtag;
 	const char *dtag_nm;
@@ -9299,7 +9295,7 @@ lpfc_els_rcv_edc(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 			goto out;
 		}
 
-		dtag = be32_to_cpu(tlv->desc_tag);
+		dtag = be32_to_cpu(tlv->hdr.desc_tag);
 		switch (dtag) {
 		case ELS_DTAG_LNK_FAULT_CAP:
 			if (bytes_remain < FC_TLV_DESC_SZ_FROM_LENGTH(tlv) ||
@@ -9314,7 +9310,7 @@ lpfc_els_rcv_edc(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 					sizeof(struct fc_diag_lnkflt_desc));
 				goto out;
 			}
-			plnkflt = (struct fc_diag_lnkflt_desc *)tlv;
+			plnkflt = &tlv->lnkflt;
 			lpfc_printf_log(phba, KERN_INFO,
 				LOG_ELS | LOG_LDS_EVENT,
 				"4626 Link Fault Desc Data: x%08x len x%x "
@@ -9351,7 +9347,7 @@ lpfc_els_rcv_edc(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
 			phba->cgn_sig_freq = lpfc_fabric_cgn_frequency;
 
 			lpfc_least_capable_settings(
-				phba, (struct fc_diag_cg_sig_desc *)tlv);
+				phba, &tlv->cg_sig);
 			break;
 		default:
 			dtag_nm = lpfc_get_tlv_dtag_nm(dtag);
@@ -9942,14 +9938,13 @@ lpfc_display_fpin_wwpn(struct lpfc_hba *phba, __be64 *wwnlist, u32 cnt)
 /**
  * lpfc_els_rcv_fpin_li - Process an FPIN Link Integrity Event.
  * @phba: Pointer to phba object.
- * @tlv:  Pointer to the Link Integrity Notification Descriptor.
+ * @li:  Pointer to the Link Integrity Notification Descriptor.
  *
  * This function processes a Link Integrity FPIN event by logging a message.
  **/
 static void
-lpfc_els_rcv_fpin_li(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_els_rcv_fpin_li(struct lpfc_hba *phba, struct fc_fn_li_desc *li)
 {
-	struct fc_fn_li_desc *li = (struct fc_fn_li_desc *)tlv;
 	const char *li_evt_str;
 	u32 li_evt, cnt;
 
@@ -9973,14 +9968,13 @@ lpfc_els_rcv_fpin_li(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
 /**
  * lpfc_els_rcv_fpin_del - Process an FPIN Delivery Event.
  * @phba: Pointer to hba object.
- * @tlv:  Pointer to the Delivery Notification Descriptor TLV
+ * @del:  Pointer to the Delivery Notification Descriptor TLV
  *
  * This function processes a Delivery FPIN event by logging a message.
  **/
 static void
-lpfc_els_rcv_fpin_del(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_els_rcv_fpin_del(struct lpfc_hba *phba, struct fc_fn_deli_desc *del)
 {
-	struct fc_fn_deli_desc *del = (struct fc_fn_deli_desc *)tlv;
 	const char *del_rsn_str;
 	u32 del_rsn;
 	__be32 *frame;
@@ -10011,14 +10005,14 @@ lpfc_els_rcv_fpin_del(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
 /**
  * lpfc_els_rcv_fpin_peer_cgn - Process a FPIN Peer Congestion Event.
  * @phba: Pointer to hba object.
- * @tlv:  Pointer to the Peer Congestion Notification Descriptor TLV
+ * @pc:  Pointer to the Peer Congestion Notification Descriptor TLV
  *
  * This function processes a Peer Congestion FPIN event by logging a message.
  **/
 static void
-lpfc_els_rcv_fpin_peer_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_els_rcv_fpin_peer_cgn(struct lpfc_hba *phba,
+			   struct fc_fn_peer_congn_desc *pc)
 {
-	struct fc_fn_peer_congn_desc *pc = (struct fc_fn_peer_congn_desc *)tlv;
 	const char *pc_evt_str;
 	u32 pc_evt, cnt;
 
@@ -10046,7 +10040,7 @@ lpfc_els_rcv_fpin_peer_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
 /**
  * lpfc_els_rcv_fpin_cgn - Process an FPIN Congestion notification
  * @phba: Pointer to hba object.
- * @tlv:  Pointer to the Congestion Notification Descriptor TLV
+ * @cgn:  Pointer to the Congestion Notification Descriptor TLV
  *
  * This function processes an FPIN Congestion Notifiction.  The notification
  * could be an Alarm or Warning.  This routine feeds that data into driver's
@@ -10055,10 +10049,9 @@ lpfc_els_rcv_fpin_peer_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
  * to the upper layer or 0 to indicate don't deliver it.
  **/
 static int
-lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_fn_congn_desc *cgn)
 {
 	struct lpfc_cgn_info *cp;
-	struct fc_fn_congn_desc *cgn = (struct fc_fn_congn_desc *)tlv;
 	const char *cgn_evt_str;
 	u32 cgn_evt;
 	const char *cgn_sev_str;
@@ -10161,7 +10154,7 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, void *p, u32 fpin_length)
 {
 	struct lpfc_hba *phba = vport->phba;
 	struct fc_els_fpin *fpin = (struct fc_els_fpin *)p;
-	struct fc_tlv_desc *tlv, *first_tlv, *current_tlv;
+	union fc_tlv_desc *tlv, *first_tlv, *current_tlv;
 	const char *dtag_nm;
 	int desc_cnt = 0, bytes_remain, cnt;
 	u32 dtag, deliver = 0;
@@ -10186,7 +10179,7 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, void *p, u32 fpin_length)
 		return;
 	}
 
-	tlv = (struct fc_tlv_desc *)&fpin->fpin_desc[0];
+	tlv = &fpin->fpin_desc[0];
 	first_tlv = tlv;
 	bytes_remain = fpin_length - offsetof(struct fc_els_fpin, fpin_desc);
 	bytes_remain = min_t(u32, bytes_remain, be32_to_cpu(fpin->desc_len));
@@ -10194,22 +10187,22 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, void *p, u32 fpin_length)
 	/* process each descriptor separately */
 	while (bytes_remain >= FC_TLV_DESC_HDR_SZ &&
 	       bytes_remain >= FC_TLV_DESC_SZ_FROM_LENGTH(tlv)) {
-		dtag = be32_to_cpu(tlv->desc_tag);
+		dtag = be32_to_cpu(tlv->hdr.desc_tag);
 		switch (dtag) {
 		case ELS_DTAG_LNK_INTEGRITY:
-			lpfc_els_rcv_fpin_li(phba, tlv);
+			lpfc_els_rcv_fpin_li(phba, &tlv->li);
 			deliver = 1;
 			break;
 		case ELS_DTAG_DELIVERY:
-			lpfc_els_rcv_fpin_del(phba, tlv);
+			lpfc_els_rcv_fpin_del(phba, &tlv->deli);
 			deliver = 1;
 			break;
 		case ELS_DTAG_PEER_CONGEST:
-			lpfc_els_rcv_fpin_peer_cgn(phba, tlv);
+			lpfc_els_rcv_fpin_peer_cgn(phba, &tlv->peer_congn);
 			deliver = 1;
 			break;
 		case ELS_DTAG_CONGESTION:
-			deliver = lpfc_els_rcv_fpin_cgn(phba, tlv);
+			deliver = lpfc_els_rcv_fpin_cgn(phba, &tlv->congn);
 			break;
 		default:
 			dtag_nm = lpfc_get_tlv_dtag_nm(dtag);
@@ -10222,12 +10215,12 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, void *p, u32 fpin_length)
 			return;
 		}
 		lpfc_cgn_update_stat(phba, dtag);
-		cnt = be32_to_cpu(tlv->desc_len);
+		cnt = be32_to_cpu(tlv->hdr.desc_len);
 
 		/* Sanity check descriptor length. The desc_len value does not
 		 * include space for the desc_tag and the desc_len fields.
 		 */
-		len -= (cnt + sizeof(struct fc_tlv_desc));
+		len -= (cnt + sizeof(struct fc_tlv_desc_hdr));
 		if (len < 0) {
 			dtag_nm = lpfc_get_tlv_dtag_nm(dtag);
 			lpfc_printf_log(phba, KERN_WARNING, LOG_CGN_MGMT,
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index 6b165a3ec6de..4462f2f7b102 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -750,13 +750,12 @@ fc_cn_stats_update(u16 event_type, struct fc_fpin_stats *stats)
  *
  */
 static void
-fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_fn_li_desc *li_desc)
 {
 	u8 i;
 	struct fc_rport *rport = NULL;
 	struct fc_rport *attach_rport = NULL;
 	struct fc_host_attrs *fc_host = shost_to_fc_host(shost);
-	struct fc_fn_li_desc *li_desc = (struct fc_fn_li_desc *)tlv;
 	u16 event_type = be16_to_cpu(li_desc->event_type);
 	u64 wwpn;
 
@@ -799,12 +798,11 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
  */
 static void
 fc_fpin_delivery_stats_update(struct Scsi_Host *shost,
-			      struct fc_tlv_desc *tlv)
+			      struct fc_fn_deli_desc *dn_desc)
 {
 	struct fc_rport *rport = NULL;
 	struct fc_rport *attach_rport = NULL;
 	struct fc_host_attrs *fc_host = shost_to_fc_host(shost);
-	struct fc_fn_deli_desc *dn_desc = (struct fc_fn_deli_desc *)tlv;
 	u32 reason_code = be32_to_cpu(dn_desc->deli_reason_code);
 
 	rport = fc_find_rport_by_wwpn(shost,
@@ -830,13 +828,11 @@ fc_fpin_delivery_stats_update(struct Scsi_Host *shost,
  */
 static void
 fc_fpin_peer_congn_stats_update(struct Scsi_Host *shost,
-				struct fc_tlv_desc *tlv)
+				struct fc_fn_peer_congn_desc *pc_desc)
 {
 	u8 i;
 	struct fc_rport *rport = NULL;
 	struct fc_rport *attach_rport = NULL;
-	struct fc_fn_peer_congn_desc *pc_desc =
-	    (struct fc_fn_peer_congn_desc *)tlv;
 	u16 event_type = be16_to_cpu(pc_desc->event_type);
 	u64 wwpn;
 
@@ -876,10 +872,9 @@ fc_fpin_peer_congn_stats_update(struct Scsi_Host *shost,
  */
 static void
 fc_fpin_congn_stats_update(struct Scsi_Host *shost,
-			   struct fc_tlv_desc *tlv)
+			   struct fc_fn_congn_desc *congn)
 {
 	struct fc_host_attrs *fc_host = shost_to_fc_host(shost);
-	struct fc_fn_congn_desc *congn = (struct fc_fn_congn_desc *)tlv;
 
 	fc_cn_stats_update(be16_to_cpu(congn->event_type),
 			   &fc_host->fpin_stats);
@@ -899,32 +894,32 @@ fc_host_fpin_rcv(struct Scsi_Host *shost, u32 fpin_len, char *fpin_buf,
 		u8 event_acknowledge)
 {
 	struct fc_els_fpin *fpin = (struct fc_els_fpin *)fpin_buf;
-	struct fc_tlv_desc *tlv;
+	union fc_tlv_desc *tlv;
 	u32 bytes_remain;
 	u32 dtag;
 	enum fc_host_event_code event_code =
 		event_acknowledge ? FCH_EVT_LINK_FPIN_ACK : FCH_EVT_LINK_FPIN;
 
 	/* Update Statistics */
-	tlv = (struct fc_tlv_desc *)&fpin->fpin_desc[0];
+	tlv = &fpin->fpin_desc[0];
 	bytes_remain = fpin_len - offsetof(struct fc_els_fpin, fpin_desc);
 	bytes_remain = min_t(u32, bytes_remain, be32_to_cpu(fpin->desc_len));
 
 	while (bytes_remain >= FC_TLV_DESC_HDR_SZ &&
 	       bytes_remain >= FC_TLV_DESC_SZ_FROM_LENGTH(tlv)) {
-		dtag = be32_to_cpu(tlv->desc_tag);
+		dtag = be32_to_cpu(tlv->hdr.desc_tag);
 		switch (dtag) {
 		case ELS_DTAG_LNK_INTEGRITY:
-			fc_fpin_li_stats_update(shost, tlv);
+			fc_fpin_li_stats_update(shost, &tlv->li);
 			break;
 		case ELS_DTAG_DELIVERY:
-			fc_fpin_delivery_stats_update(shost, tlv);
+			fc_fpin_delivery_stats_update(shost, &tlv->deli);
 			break;
 		case ELS_DTAG_PEER_CONGEST:
-			fc_fpin_peer_congn_stats_update(shost, tlv);
+			fc_fpin_peer_congn_stats_update(shost, &tlv->peer_congn);
 			break;
 		case ELS_DTAG_CONGESTION:
-			fc_fpin_congn_stats_update(shost, tlv);
+			fc_fpin_congn_stats_update(shost, &tlv->congn);
 		}
 
 		bytes_remain -= FC_TLV_DESC_SZ_FROM_LENGTH(tlv);
diff --git a/include/uapi/scsi/fc/fc_els.h b/include/uapi/scsi/fc/fc_els.h
index 16782c360de3..3598dc553f4d 100644
--- a/include/uapi/scsi/fc/fc_els.h
+++ b/include/uapi/scsi/fc/fc_els.h
@@ -253,12 +253,12 @@ enum fc_ls_tlv_dtag {
 
 
 /*
- * Generic Link Service TLV Descriptor format
+ * Generic Link Service TLV Descriptor header
  *
  * This structure, as it defines no payload, will also be referred to
  * as the "tlv header" - which contains the tag and len fields.
  */
-struct fc_tlv_desc {
+struct fc_tlv_desc_hdr {
 	__be32		desc_tag;	/* Notification Descriptor Tag */
 	__be32		desc_len;	/* Length of Descriptor (in bytes).
 					 * Size of descriptor excluding
@@ -267,36 +267,6 @@ struct fc_tlv_desc {
 	__u8		desc_value[];  /* Descriptor Value */
 };
 
-/* Descriptor tag and len fields are considered the mandatory header
- * for a descriptor
- */
-#define FC_TLV_DESC_HDR_SZ	sizeof(struct fc_tlv_desc)
-
-/*
- * Macro, used when initializing payloads, to return the descriptor length.
- * Length is size of descriptor minus the tag and len fields.
- */
-#define FC_TLV_DESC_LENGTH_FROM_SZ(desc)	\
-		(sizeof(desc) - FC_TLV_DESC_HDR_SZ)
-
-/* Macro, used on received payloads, to return the descriptor length */
-#define FC_TLV_DESC_SZ_FROM_LENGTH(tlv)		\
-		(__be32_to_cpu((tlv)->desc_len) + FC_TLV_DESC_HDR_SZ)
-
-/*
- * This helper is used to walk descriptors in a descriptor list.
- * Given the address of the current descriptor, which minimally contains a
- * tag and len field, calculate the address of the next descriptor based
- * on the len field.
- */
-static inline void *fc_tlv_next_desc(void *desc)
-{
-	struct fc_tlv_desc *tlv = desc;
-
-	return (desc + FC_TLV_DESC_SZ_FROM_LENGTH(tlv));
-}
-
-
 /*
  * Link Service Request Information Descriptor
  */
@@ -1094,19 +1064,6 @@ struct fc_fn_congn_desc {
 	__u8		resv[3];	/* reserved - must be zero */
 };
 
-/*
- * ELS_FPIN - Fabric Performance Impact Notification
- */
-struct fc_els_fpin {
-	__u8		fpin_cmd;	/* command (0x16) */
-	__u8		fpin_zero[3];	/* specified as zero - part of cmd */
-	__be32		desc_len;	/* Length of Descriptor List (in bytes).
-					 * Size of ELS excluding fpin_cmd,
-					 * fpin_zero and desc_len fields.
-					 */
-	struct fc_tlv_desc	fpin_desc[];	/* Descriptor list */
-};
-
 /* Diagnostic Function Descriptor - FPIN Registration */
 struct fc_df_desc_fpin_reg {
 	__be32		desc_tag;	/* FPIN Registration (0x00030001) */
@@ -1125,33 +1082,6 @@ struct fc_df_desc_fpin_reg {
 					 */
 };
 
-/*
- * ELS_RDF - Register Diagnostic Functions
- */
-struct fc_els_rdf {
-	__u8		fpin_cmd;	/* command (0x19) */
-	__u8		fpin_zero[3];	/* specified as zero - part of cmd */
-	__be32		desc_len;	/* Length of Descriptor List (in bytes).
-					 * Size of ELS excluding fpin_cmd,
-					 * fpin_zero and desc_len fields.
-					 */
-	struct fc_tlv_desc	desc[];	/* Descriptor list */
-};
-
-/*
- * ELS RDF LS_ACC Response.
- */
-struct fc_els_rdf_resp {
-	struct fc_els_ls_acc	acc_hdr;
-	__be32			desc_list_len;	/* Length of response (in
-						 * bytes). Excludes acc_hdr
-						 * and desc_list_len fields.
-						 */
-	struct fc_els_lsri_desc	lsri;
-	struct fc_tlv_desc	desc[];	/* Supported Descriptor list */
-};
-
-
 /*
  * Diagnostic Capability Descriptors for EDC ELS
  */
@@ -1221,6 +1151,65 @@ struct fc_diag_cg_sig_desc {
 	struct fc_diag_cg_sig_freq	rcv_signal_frequency;
 };
 
+/*
+ * Generic Link Service TLV Descriptor format
+ *
+ * This structure, as it defines no payload, will also be referred to
+ * as the "tlv header" - which contains the tag and len fields.
+ */
+union fc_tlv_desc {
+	struct fc_tlv_desc_hdr hdr;
+	struct fc_els_lsri_desc lsri;
+	struct fc_fn_li_desc li;
+	struct fc_fn_deli_desc deli;
+	struct fc_fn_peer_congn_desc peer_congn;
+	struct fc_fn_congn_desc congn;
+	struct fc_df_desc_fpin_reg fpin_reg;
+	struct fc_diag_lnkflt_desc lnkflt;
+	struct fc_diag_cg_sig_desc cg_sig;
+};
+
+/* Descriptor tag and len fields are considered the mandatory header
+ * for a descriptor
+ */
+#define FC_TLV_DESC_HDR_SZ	sizeof(struct fc_tlv_desc_hdr)
+
+/*
+ * Macro, used when initializing payloads, to return the descriptor length.
+ * Length is size of descriptor minus the tag and len fields.
+ */
+#define FC_TLV_DESC_LENGTH_FROM_SZ(desc)	\
+		(sizeof(desc) - FC_TLV_DESC_HDR_SZ)
+
+/* Macro, used on received payloads, to return the descriptor length */
+#define FC_TLV_DESC_SZ_FROM_LENGTH(tlv)		\
+		(__be32_to_cpu((tlv)->hdr.desc_len) + FC_TLV_DESC_HDR_SZ)
+
+/*
+ * This helper is used to walk descriptors in a descriptor list.
+ * Given the address of the current descriptor, which minimally contains a
+ * tag and len field, calculate the address of the next descriptor based
+ * on the len field.
+ */
+static inline union fc_tlv_desc *fc_tlv_next_desc(union fc_tlv_desc *desc)
+{
+	return (union fc_tlv_desc *)((__u8 *)desc + FC_TLV_DESC_SZ_FROM_LENGTH(desc));
+}
+
+
+/*
+ * ELS_FPIN - Fabric Performance Impact Notification
+ */
+struct fc_els_fpin {
+	__u8		fpin_cmd;	/* command (0x16) */
+	__u8		fpin_zero[3];	/* specified as zero - part of cmd */
+	__be32		desc_len;	/* Length of Descriptor List (in bytes).
+					 * Size of ELS excluding fpin_cmd,
+					 * fpin_zero and desc_len fields.
+					 */
+	union fc_tlv_desc	fpin_desc[];	/* Descriptor list */
+};
+
 /*
  * ELS_EDC - Exchange Diagnostic Capabilities
  */
@@ -1231,10 +1220,37 @@ struct fc_els_edc {
 					 * Size of ELS excluding edc_cmd,
 					 * edc_zero and desc_len fields.
 					 */
-	struct fc_tlv_desc	desc[];
+	union fc_tlv_desc	desc[];
 					/* Diagnostic Descriptor list */
 };
 
+/*
+ * ELS_RDF - Register Diagnostic Functions
+ */
+struct fc_els_rdf {
+	__u8		fpin_cmd;	/* command (0x19) */
+	__u8		fpin_zero[3];	/* specified as zero - part of cmd */
+	__be32		desc_len;	/* Length of Descriptor List (in bytes).
+					 * Size of ELS excluding fpin_cmd,
+					 * fpin_zero and desc_len fields.
+					 */
+	union fc_tlv_desc	desc[];	/* Descriptor list */
+};
+
+/*
+ * ELS RDF LS_ACC Response.
+ */
+struct fc_els_rdf_resp {
+	struct fc_els_ls_acc	acc_hdr;
+	__be32			desc_list_len;	/* Length of response (in
+						 * bytes). Excludes acc_hdr
+						 * and desc_list_len fields.
+						 */
+	struct fc_els_lsri_desc	lsri;
+	union fc_tlv_desc	desc[];	/* Supported Descriptor list */
+};
+
+
 /*
  * ELS EDC LS_ACC Response.
  */
@@ -1245,9 +1261,8 @@ struct fc_els_edc_resp {
 						 * and desc_list_len fields.
 						 */
 	struct fc_els_lsri_desc	lsri;
-	struct fc_tlv_desc	desc[];
+	union fc_tlv_desc	desc[];
 				    /* Supported Diagnostic Descriptor list */
 };
 
-
 #endif /* _FC_ELS_H_ */
-- 
2.50.0




* [PATCH v8 2/8] nvme: add NVME_CTRL_MARGINAL flag
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 1/8] fc_els: use 'union fc_tlv_desc' Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 3/8] nvme-fc: marginal path handling Bryan Gurney
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

Add a new controller flag, NVME_CTRL_MARGINAL, to help multipath I/O
policies react to a path that has been set to a "marginal" state.

The flag is cleared on controller reset, which is often the case when
faulty cabling or transceiver hardware is replaced.

Signed-off-by: Bryan Gurney <bgurney@redhat.com>
---
 drivers/nvme/host/core.c | 1 +
 drivers/nvme/host/nvme.h | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5d8638086cba..c4ae4041c055 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5039,6 +5039,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
 	WRITE_ONCE(ctrl->state, NVME_CTRL_NEW);
 	ctrl->passthru_err_log_enabled = false;
 	clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
+	clear_bit(NVME_CTRL_MARGINAL, &ctrl->flags);
 	spin_lock_init(&ctrl->lock);
 	mutex_init(&ctrl->namespaces_lock);
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index cfd2b5b90b91..d71e6668f11c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -275,6 +275,7 @@ enum nvme_ctrl_flags {
 	NVME_CTRL_SKIP_ID_CNS_CS	= 4,
 	NVME_CTRL_DIRTY_CAPABILITY	= 5,
 	NVME_CTRL_FROZEN		= 6,
+	NVME_CTRL_MARGINAL		= 7,
 };
 
 struct nvme_ctrl {
@@ -417,6 +418,11 @@ static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
 	return READ_ONCE(ctrl->state);
 }
 
+static inline bool nvme_ctrl_is_marginal(struct nvme_ctrl *ctrl)
+{
+	return test_bit(NVME_CTRL_MARGINAL, &ctrl->flags);
+}
+
 enum nvme_iopolicy {
 	NVME_IOPOLICY_NUMA,
 	NVME_IOPOLICY_RR,
-- 
2.50.0




* [PATCH v8 3/8] nvme-fc: marginal path handling
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 1/8] fc_els: use 'union fc_tlv_desc' Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 2/8] nvme: add NVME_CTRL_MARGINAL flag Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 4/8] nvme-fc: nvme_fc_fpin_rcv() callback Bryan Gurney
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

From: Hannes Reinecke <hare@kernel.org>

FPIN LI (link integrity) messages are received when the attached
fabric detects hardware errors. In response to these messages I/O
should be directed away from the affected ports, and only used
if the 'optimized' paths are unavailable.
To handle this, a new controller flag 'NVME_CTRL_MARGINAL' is added,
which causes the multipath scheduler to skip these paths when
checking for 'optimized' paths. They are, however, still eligible
for non-optimized path selection. The flag is cleared upon reset,
as the faulty hardware might have been replaced by then.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Tested-by: Bryan Gurney <bgurney@redhat.com>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
---
 drivers/nvme/host/fc.c        |  4 ++++
 drivers/nvme/host/multipath.c | 17 +++++++++++------
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 08a5ea3e9383..8787c566f939 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -786,6 +786,10 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
 		"Reconnect", ctrl->cnum);
 
 	set_bit(ASSOC_FAILED, &ctrl->flags);
+
+	/* clear 'marginal' flag as controller will be reset */
+	clear_bit(NVME_CTRL_MARGINAL, &ctrl->flags);
+
 	nvme_reset_ctrl(&ctrl->ctrl);
 }
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 316a269842fa..8d4e54bb4261 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -324,11 +324,14 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
 
 		switch (ns->ana_state) {
 		case NVME_ANA_OPTIMIZED:
-			if (distance < found_distance) {
-				found_distance = distance;
-				found = ns;
+			if (!nvme_ctrl_is_marginal(ns->ctrl)) {
+				if (distance < found_distance) {
+					found_distance = distance;
+					found = ns;
+				}
+				break;
 			}
-			break;
+			fallthrough;
 		case NVME_ANA_NONOPTIMIZED:
 			if (distance < fallback_distance) {
 				fallback_distance = distance;
@@ -381,7 +384,8 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head)
 
 		if (ns->ana_state == NVME_ANA_OPTIMIZED) {
 			found = ns;
-			goto out;
+			if (!nvme_ctrl_is_marginal(ns->ctrl))
+				goto out;
 		}
 		if (ns->ana_state == NVME_ANA_NONOPTIMIZED)
 			found = ns;
@@ -445,7 +449,8 @@ static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
 static inline bool nvme_path_is_optimized(struct nvme_ns *ns)
 {
 	return nvme_ctrl_state(ns->ctrl) == NVME_CTRL_LIVE &&
-		ns->ana_state == NVME_ANA_OPTIMIZED;
+		ns->ana_state == NVME_ANA_OPTIMIZED &&
+		!nvme_ctrl_is_marginal(ns->ctrl);
 }
 
 static struct nvme_ns *nvme_numa_path(struct nvme_ns_head *head)
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v8 4/8] nvme-fc: nvme_fc_fpin_rcv() callback
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
                   ` (2 preceding siblings ...)
  2025-07-09 21:19 ` [PATCH v8 3/8] nvme-fc: marginal path handling Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 5/8] lpfc: enable FPIN notification for NVMe Bryan Gurney
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

From: Hannes Reinecke <hare@kernel.org>

Add a callback nvme_fc_fpin_rcv() to evaluate the FPIN LI TLV
information and set the 'marginal' path status for all affected
rports.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Tested-by: Bryan Gurney <bgurney@redhat.com>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
---
 drivers/nvme/host/fc.c         | 95 ++++++++++++++++++++++++++++++++++
 include/linux/nvme-fc-driver.h |  3 ++
 2 files changed, 98 insertions(+)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 8787c566f939..210b6d57f76e 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -3724,6 +3724,101 @@ static struct nvmf_transport_ops nvme_fc_transport = {
 	.create_ctrl	= nvme_fc_create_ctrl,
 };
 
+static struct nvme_fc_rport *nvme_fc_rport_from_wwpn(struct nvme_fc_lport *lport,
+		u64 rport_wwpn)
+{
+	struct nvme_fc_rport *rport;
+
+	list_for_each_entry(rport, &lport->endp_list, endp_list) {
+		if (!nvme_fc_rport_get(rport))
+			continue;
+		if (rport->remoteport.port_name == rport_wwpn &&
+		    rport->remoteport.port_role & FC_PORT_ROLE_NVME_TARGET)
+			return rport;
+		nvme_fc_rport_put(rport);
+	}
+	return NULL;
+}
+
+static void
+nvme_fc_fpin_li_lport_update(struct nvme_fc_lport *lport, struct fc_fn_li_desc *li)
+{
+	unsigned int i, pname_count = be32_to_cpu(li->pname_count);
+	u64 attached_wwpn = be64_to_cpu(li->attached_wwpn);
+	struct nvme_fc_rport *attached_rport;
+
+	for (i = 0; i < pname_count; i++) {
+		struct nvme_fc_rport *rport;
+		u64 wwpn = be64_to_cpu(li->pname_list[i]);
+
+		rport = nvme_fc_rport_from_wwpn(lport, wwpn);
+		if (!rport)
+			continue;
+		if (wwpn != attached_wwpn) {
+			struct nvme_fc_ctrl *ctrl;
+
+			spin_lock_irq(&rport->lock);
+			list_for_each_entry(ctrl, &rport->ctrl_list, ctrl_list)
+				set_bit(NVME_CTRL_MARGINAL, &ctrl->ctrl.flags);
+			spin_unlock_irq(&rport->lock);
+		}
+		nvme_fc_rport_put(rport);
+	}
+
+	attached_rport = nvme_fc_rport_from_wwpn(lport, attached_wwpn);
+	if (attached_rport) {
+		struct nvme_fc_ctrl *ctrl;
+
+		spin_lock_irq(&attached_rport->lock);
+		list_for_each_entry(ctrl, &attached_rport->ctrl_list, ctrl_list)
+			set_bit(NVME_CTRL_MARGINAL, &ctrl->ctrl.flags);
+		spin_unlock_irq(&attached_rport->lock);
+		nvme_fc_rport_put(attached_rport);
+	}
+}
+
+/**
+ * nvme_fc_fpin_rcv() - Process a received FPIN.
+ * @localport:		local port the FPIN was received on
+ * @fpin_len:		length of FPIN payload, in bytes
+ * @fpin_buf:		pointer to FPIN payload
+ * Notes:
+ *	This routine assumes no locks are held on entry.
+ */
+void
+nvme_fc_fpin_rcv(struct nvme_fc_local_port *localport,
+		 u32 fpin_len, char *fpin_buf)
+{
+	struct nvme_fc_lport *lport;
+	struct fc_els_fpin *fpin = (struct fc_els_fpin *)fpin_buf;
+	union fc_tlv_desc *tlv;
+	u32 bytes_remain;
+	u32 dtag;
+
+	if (!localport)
+		return;
+	lport = localport_to_lport(localport);
+	tlv = &fpin->fpin_desc[0];
+	bytes_remain = fpin_len - offsetof(struct fc_els_fpin, fpin_desc);
+	bytes_remain = min_t(u32, bytes_remain, be32_to_cpu(fpin->desc_len));
+
+	while (bytes_remain >= FC_TLV_DESC_HDR_SZ &&
+	       bytes_remain >= FC_TLV_DESC_SZ_FROM_LENGTH(tlv)) {
+		dtag = be32_to_cpu(tlv->hdr.desc_tag);
+		switch (dtag) {
+		case ELS_DTAG_LNK_INTEGRITY:
+			nvme_fc_fpin_li_lport_update(lport, &tlv->li);
+			break;
+		default:
+			break;
+		}
+
+		bytes_remain -= FC_TLV_DESC_SZ_FROM_LENGTH(tlv);
+		tlv = fc_tlv_next_desc(tlv);
+	}
+}
+EXPORT_SYMBOL(nvme_fc_fpin_rcv);
+
 /* Arbitrary successive failures max. With lots of subsystems could be high */
 #define DISCOVERY_MAX_FAIL	20
 
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 9f6acadfe0c8..bcd3b1e5a256 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -536,6 +536,9 @@ void nvme_fc_rescan_remoteport(struct nvme_fc_remote_port *remoteport);
 int nvme_fc_set_remoteport_devloss(struct nvme_fc_remote_port *remoteport,
 			u32 dev_loss_tmo);
 
+void nvme_fc_fpin_rcv(struct nvme_fc_local_port *localport,
+		      u32 fpin_len, char *fpin_buf);
+
 /*
  * Routine called to pass a NVME-FC LS request, received by the lldd,
  * to the nvme-fc transport.
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v8 5/8] lpfc: enable FPIN notification for NVMe
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
                   ` (3 preceding siblings ...)
  2025-07-09 21:19 ` [PATCH v8 4/8] nvme-fc: nvme_fc_fpin_rcv() callback Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 21:19 ` [PATCH v8 6/8] qla2xxx: " Bryan Gurney
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

From: Hannes Reinecke <hare@kernel.org>

Call 'nvme_fc_fpin_rcv()' to enable FPIN notifications for NVMe.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: Justin Tee <justin.tee@broadcom.com>
Tested-by: Bryan Gurney <bgurney@redhat.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
---
 drivers/scsi/lpfc/lpfc_els.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index c7cbc5b50dfe..146ed6fd41af 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -33,6 +33,7 @@
 #include <scsi/scsi_transport_fc.h>
 #include <uapi/scsi/fc/fc_fs.h>
 #include <uapi/scsi/fc/fc_els.h>
+#include <linux/nvme-fc-driver.h>
 
 #include "lpfc_hw4.h"
 #include "lpfc_hw.h"
@@ -10249,9 +10250,15 @@ lpfc_els_rcv_fpin(struct lpfc_vport *vport, void *p, u32 fpin_length)
 		fpin_length += sizeof(struct fc_els_fpin); /* the entire FPIN */
 
 		/* Send every descriptor individually to the upper layer */
-		if (deliver)
+		if (deliver) {
 			fc_host_fpin_rcv(lpfc_shost_from_vport(vport),
 					 fpin_length, (char *)fpin, 0);
+#if (IS_ENABLED(CONFIG_NVME_FC))
+			if (vport->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
+				nvme_fc_fpin_rcv(vport->localport,
+						 fpin_length, (char *)fpin);
+#endif
+		}
 		desc_cnt++;
 	}
 }
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v8 6/8] qla2xxx: enable FPIN notification for NVMe
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
                   ` (4 preceding siblings ...)
  2025-07-09 21:19 ` [PATCH v8 5/8] lpfc: enable FPIN notification for NVMe Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 22:01   ` John Meneghini
  2025-07-09 21:19 ` [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state() Bryan Gurney
  2025-07-09 22:05 ` [PATCH v8 0/8] nvme-fc: FPIN link integrity handling John Meneghini
  7 siblings, 1 reply; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

From: Hannes Reinecke <hare@kernel.org>

Call 'nvme_fc_fpin_rcv()' to enable FPIN notifications for NVMe.

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
---
 drivers/scsi/qla2xxx/qla_isr.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index c4c6b5c6658c..f5e40e22ad7d 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -46,6 +46,9 @@ qla27xx_process_purex_fpin(struct scsi_qla_host *vha, struct purex_item *item)
 		       pkt, pkt_size);
 
 	fc_host_fpin_rcv(vha->host, pkt_size, (char *)pkt, 0);
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	nvme_fc_fpin_rcv(vha->nvme_local_port, pkt_size, (char *)pkt);
+#endif
 }
 
 const char *const port_state_str[] = {
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
                   ` (5 preceding siblings ...)
  2025-07-09 21:19 ` [PATCH v8 6/8] qla2xxx: " Bryan Gurney
@ 2025-07-09 21:19 ` Bryan Gurney
  2025-07-09 22:12   ` Keith Busch
  2025-07-09 22:05 ` [PATCH v8 0/8] nvme-fc: FPIN link integrity handling John Meneghini
  7 siblings, 1 reply; 17+ messages in thread
From: Bryan Gurney @ 2025-07-09 21:19 UTC (permalink / raw)
  To: linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare, bgurney,
	jmeneghi

If a controller has received a link integrity or congestion event, and
has the NVME_CTRL_MARGINAL flag set, emit "marginal" in the state
instead of "live", to identify the marginal paths.

Co-developed-by: John Meneghini <jmeneghi@redhat.com>
Signed-off-by: John Meneghini <jmeneghi@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
Signed-off-by: Bryan Gurney <bgurney@redhat.com>
---
 drivers/nvme/host/sysfs.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 29430949ce2f..4a6135c2f9cb 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -430,7 +430,9 @@ static ssize_t nvme_sysfs_show_state(struct device *dev,
 	};
 
 	if (state < ARRAY_SIZE(state_name) && state_name[state])
-		return sysfs_emit(buf, "%s\n", state_name[state]);
+		return sysfs_emit(buf, "%s\n",
+			(nvme_ctrl_is_marginal(ctrl)) ? "marginal" :
+			state_name[state]);
 
 	return sysfs_emit(buf, "unknown state\n");
 }
-- 
2.50.0



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 6/8] qla2xxx: enable FPIN notification for NVMe
  2025-07-09 21:19 ` [PATCH v8 6/8] qla2xxx: " Bryan Gurney
@ 2025-07-09 22:01   ` John Meneghini
  0 siblings, 0 replies; 17+ messages in thread
From: John Meneghini @ 2025-07-09 22:01 UTC (permalink / raw)
  To: Bryan Gurney, linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare

Some bad news about this patch.

All tests with the LPFC driver pass, but when we tested this series on a system with a QLA adapter we see the following problem.

Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel:
------------[ cut here ]------------
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: memcpy: detected field-spanning write (size 60) of single field "((uint8_t *)fpin_pkt + buffer_copy_offset)" at drivers/scsi/qla2xxx/qla_isr.c:1221 (size 44)
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: WARNING: CPU: 73 PID: 0 at drivers/scsi/qla2xxx/qla_isr.c:1221 qla27xx_copy_fpin_pkt.isra.0+0x292/0x3a0 [qla2xxx]
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Modules linked in: nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfkill ip_set nf_tables irdma i40e ib_uverbs ib_core intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common intel_ifs i10nm_edac skx_edac_common nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel vfat fat kvm dax_hmem cxl_acpi cxl_port irqbypass iTCO_wdt cxl_core rapl mei_me iTCO_vendor_support ipmi_ssif dell_smbios acpi_power_meter intel_cstate pmt_telemetry platform_profile isst_if_mbox_pci isst_if_mmio sd_mod ice pmt_class intel_sdsi dcdbas intel_uncore mgag200 einj dell_wmi_descriptor wmi_bmof tg3 pcspkr sg mei i2c_i801 isst_if_common gnss intel_vsec i2c_algo_bit libie i2c_smbus ipmi_si i2c_ismt acpi_ipmi ipmi_devintf ipmi_msghandler dm_multipath fuse dm_mod loop nfnetlink xfs qla2xxx iaa_crypto qat_4xxx intel_qat nvme_fc ahci nvme libahci nvme_fabrics idxd
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: nvme_core libata ghash_clmulni_intel scsi_transport_fc nvme_keyring idxd_bus crc8 nvme_auth wmi pinctrl_emmitsburg
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: CPU: 73 UID: 0 PID: 0 Comm: swapper/73 Kdump: loaded Tainted: G S 6.16.0-rc5.nvmefpinli-v8+ #1 PREEMPT(voluntary)
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Tainted: [S]=CPU_OUT_OF_SPEC
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Hardware name: Dell Inc. PowerEdge R660/0HGTK9, BIOS 2.5.4 01/16/2025
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RIP: 0010:qla27xx_copy_fpin_pkt.isra.0+0x292/0x3a0 [qla2xxx]
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Code: e9 48 c7 c2 38 c0 b2 c0 48 89 44 24 38 48 c7 c7 98 c0 b2 c0 4c 89 44 24 28 48 89 74 24 20 c6 05 db 6e 1f 00 01 e8 7e 06 c4 c3 <0f> 0b 48 8b 44 24 38 4c 8b 44 24 28 48 8b 74 24 20 e9 d1 fe ff ff
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RSP: 0018:ff5e253987a04e40 EFLAGS: 00010082
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RAX: 0000000000000000 RBX: ff3c99b7dc380cc0 RCX: 0000000000000027
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RDX: ff3c99c77f71c148 RSI: 0000000000000001 RDI: ff3c99c77f71c140
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RBP: 000000000000002c R08: 0000000000000000 R09: ff5e253987a04cb8
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: R10: ffffffff86124408 R11: 0000000000000003 R12: 000000000000003c
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: R13: 000000000000003c R14: ff3c99b7eb156600 R15: 0000000000000014
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: FS:  0000000000000000(0000) GS:ff3c99c7f89c5000(0000) knlGS:0000000000000000
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: CR2: 00007f19d9fce004 CR3: 0000001ab0a24003 CR4: 0000000000f73ef0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: PKRU: 55555554
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Call Trace:
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel:  <IRQ>
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: qla24xx_process_response_queue+0xac7/0xbd0 [qla2xxx]
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: ? running_clock+0x10/0x30
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: ? watchdog_timer_fn+0x127/0x1c0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: qla24xx_msix_rsp_q+0x43/0xb0 [qla2xxx]
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: __handle_irq_event_percpu+0x47/0x1a0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: handle_irq_event+0x38/0x90
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: handle_edge_irq+0x90/0x1e0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: __common_interrupt+0x3b/0x90
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: common_interrupt+0x80/0xa0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel:  </IRQ>
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel:  <TASK>
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: asm_common_interrupt+0x26/0x40
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RIP: 0010:cpuidle_enter_state+0xc0/0x410
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: Code: 69 02 00 00 e8 e1 36 34 ff e8 dc f1 ff ff 49 89 c5 0f 1f 44 00 00 31 ff e8 bd f7 32 ff 45 84 ff 0f 85 3b 02 00 00 fb 45 85 f6 <0f> 88 84 01 00 00 49 63 d6 48 8d 04 52 48 8d 04 82 49 8d 0c c4 48
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RSP: 0018:ff5e2539869e7e68 EFLAGS: 00000206
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RAX: ff3c99c7f89c5000 RBX: 0000000000000003 RCX: 0000000000000000
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RDX: 000000ee5c1e784a RSI: ffffffe9b859fd90 RDI: 0000000000000000
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: RBP: ff9025397f9000b0 R08: 0000000000000000 R09: 0000001955cd9750
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: R10: 00000000003c5804 R11: ff3c99c77f72f9ec R12: ffffffff862d8040
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: R13: 000000ee5c1e784a R14: 0000000000000003 R15: 0000000000000000
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: ? cpuidle_enter_state+0xb3/0x410
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: cpuidle_enter+0x2d/0x40
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: cpuidle_idle_call+0x111/0x1a0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: do_idle+0x73/0xd0
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: cpu_startup_entry+0x29/0x30
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: start_secondary+0x114/0x140
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: common_startup_64+0x13e/0x141
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel:  </TASK>
Jul 09 16:36:59 rhel-storage-107.fast.eng.rdu2.dc.redhat.com kernel: ---[ end trace 0000000000000000 ]---


John A. Meneghini
Senior Principal Platform Storage Engineer
RHEL SST - Platform Storage Group
jmeneghi@redhat.com

On 7/9/25 5:19 PM, Bryan Gurney wrote:
> From: Hannes Reinecke <hare@kernel.org>
> 
> Call 'nvme_fc_fpin_rcv()' to enable FPIN notifications for NVMe.
> 
> Signed-off-by: Hannes Reinecke <hare@kernel.org>
> Reviewed-by: John Meneghini <jmeneghi@redhat.com>
> Tested-by: Muneendra Kumar <muneendra.kumar@broadcom.com>
> ---
>   drivers/scsi/qla2xxx/qla_isr.c | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
> index c4c6b5c6658c..f5e40e22ad7d 100644
> --- a/drivers/scsi/qla2xxx/qla_isr.c
> +++ b/drivers/scsi/qla2xxx/qla_isr.c
> @@ -46,6 +46,9 @@ qla27xx_process_purex_fpin(struct scsi_qla_host *vha, struct purex_item *item)
>   		       pkt, pkt_size);
>   
>   	fc_host_fpin_rcv(vha->host, pkt_size, (char *)pkt, 0);
> +#if (IS_ENABLED(CONFIG_NVME_FC))
> +	nvme_fc_fpin_rcv(vha->nvme_local_port, pkt_size, (char *)pkt);
> +#endif
>   }
>   
>   const char *const port_state_str[] = {


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 0/8] nvme-fc: FPIN link integrity handling
  2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
                   ` (6 preceding siblings ...)
  2025-07-09 21:19 ` [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state() Bryan Gurney
@ 2025-07-09 22:05 ` John Meneghini
  7 siblings, 0 replies; 17+ messages in thread
From: John Meneghini @ 2025-07-09 22:05 UTC (permalink / raw)
  To: Bryan Gurney, linux-nvme, kbusch, hch, sagi, axboe
  Cc: james.smart, dick.kennedy, njavali, linux-scsi, hare

I've opened an upstream bugzilla to track this enhancement.

https://bugzilla.kernel.org/show_bug.cgi?id=220329

I've asked Bryan to record all information about the unit tests we are developing for FPIN there.

John A. Meneghini
Senior Principal Platform Storage Engineer
RHEL SST - Platform Storage Group
jmeneghi@redhat.com

On 7/9/25 5:19 PM, Bryan Gurney wrote:
> FPIN LI (link integrity) messages are received when the attached
> fabric detects hardware errors. In response to these messages I/O
> should be directed away from the affected ports, and only used
> if the 'optimized' paths are unavailable.
> Upon port reset the paths should be put back in service as the
> affected hardware might have been replaced.
> This patch adds a new controller flag 'NVME_CTRL_MARGINAL'
> which will be checked during multipath path selection, causing the
> path to be skipped when checking for 'optimized' paths. If no
> optimized paths are available the 'marginal' paths are considered
> for path selection alongside the 'non-optimized' paths.
> It also introduces a new nvme-fc callback 'nvme_fc_fpin_rcv()' to
> evaluate the FPIN LI TLV payload and set the 'marginal' state on
> all affected rports.
> 
> The testing for this patch set was performed by Bryan Gurney, using the
> process outlined by John Meneghini's presentation at LSFMM 2024, where
> the fibre channel switch sends an FPIN notification on a specific switch
> port, and the following is checked on the initiator:
> 
> 1. The controllers corresponding to the paths on the port that has
> received the notification are showing a set NVME_CTRL_MARGINAL flag.
> 
>     \
>      +- nvme4 fc traddr=c,host_traddr=e live optimized
>      +- nvme5 fc traddr=8,host_traddr=e live non-optimized
>      +- nvme8 fc traddr=e,host_traddr=f marginal optimized
>      +- nvme9 fc traddr=a,host_traddr=f marginal non-optimized
> 
> 2. The I/O statistics of the test namespace show no I/O activity on the
> controllers with NVME_CTRL_MARGINAL set.
> 
>     Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
>     nvme4c4n1         0.00         0.00         0.00         0.00
>     nvme4c5n1     25001.00         0.00        97.66         0.00
>     nvme4c9n1     25000.00         0.00        97.66         0.00
>     nvme4n1       50011.00         0.00       195.36         0.00
> 
> 
>     Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
>     nvme4c4n1         0.00         0.00         0.00         0.00
>     nvme4c5n1     48360.00         0.00       188.91         0.00
>     nvme4c9n1      1642.00         0.00         6.41         0.00
>     nvme4n1       49981.00         0.00       195.24         0.00
> 
> 
>     Device             tps    MB_read/s    MB_wrtn/s    MB_dscd/s
>     nvme4c4n1         0.00         0.00         0.00         0.00
>     nvme4c5n1     50001.00         0.00       195.32         0.00
>     nvme4c9n1         0.00         0.00         0.00         0.00
>     nvme4n1       50016.00         0.00       195.38         0.00
> 
> Link: https://people.redhat.com/jmeneghi/LSFMM_2024/LSFMM_2024_NVMe_Cancel_and_FPIN.pdf
> 
> More rigorous testing was also performed to ensure proper path migration
> on each of the eight different FPIN link integrity events, particularly
> during a scenario where there are only non-optimized paths available, in
> a state where all paths are marginal.  On a configuration with a
> round-robin iopolicy, when all paths on the host show as marginal, I/O
> continues on the optimized path that was most recently non-marginal.
>  From this point, if both of the optimized paths are down, I/O properly
> continues on the remaining paths.
> 
> The testing so far has been done with an Emulex host bus adapter using
> lpfc.  When tested on a QLogic host bus adapter, a warning was found
> when the first FPIN link integrity event was received by the host:
> 
>    kernel: memcpy: detected field-spanning write (size 60) of single field
>    "((uint8_t *)fpin_pkt + buffer_copy_offset)"
>    at drivers/scsi/qla2xxx/qla_isr.c:1221 (size 44)
> 
> Line 1221 of qla_isr.c is in the function qla27xx_copy_fpin_pkt().
> 
> 
> Changes to the original submission:
> - Changed flag name to 'marginal'
> - Do not block marginal path; influence path selection instead
>    to de-prioritize marginal paths
> 
> Changes to v2:
> - Split off driver-specific modifications
> - Introduce 'union fc_tlv_desc' to avoid casts
> 
> Changes to v3:
> - Include reviews from Justin Tee
> - Split marginal path handling patch
> 
> Changes to v4:
> - Change 'u8' to '__u8' on fc_tlv_desc to fix a failure to build
> - Print 'marginal' instead of 'live' in the state of controllers
>    when they are marginal
> 
> Changes to v5:
> - Minor spelling corrections to patch descriptions
> 
> Changes to v6:
> - No code changes; added note about additional testing
> 
> Changes to v7:
> - Split nvme core marginal flag addition into its own patch
> - Add patch for queue_depth marginal path support
> 
> Bryan Gurney (2):
>    nvme: add NVME_CTRL_MARGINAL flag
>    nvme: sysfs: emit the marginal path state in show_state()
> 
> Hannes Reinecke (5):
>    fc_els: use 'union fc_tlv_desc'
>    nvme-fc: marginal path handling
>    nvme-fc: nvme_fc_fpin_rcv() callback
>    lpfc: enable FPIN notification for NVMe
>    qla2xxx: enable FPIN notification for NVMe
> 
> John Meneghini (1):
>    nvme-multipath: queue-depth support for marginal paths
> 
>   drivers/nvme/host/core.c         |   1 +
>   drivers/nvme/host/fc.c           |  99 +++++++++++++++++++
>   drivers/nvme/host/multipath.c    |  24 +++--
>   drivers/nvme/host/nvme.h         |   6 ++
>   drivers/nvme/host/sysfs.c        |   4 +-
>   drivers/scsi/lpfc/lpfc_els.c     |  84 ++++++++--------
>   drivers/scsi/qla2xxx/qla_isr.c   |   3 +
>   drivers/scsi/scsi_transport_fc.c |  27 +++--
>   include/linux/nvme-fc-driver.h   |   3 +
>   include/uapi/scsi/fc/fc_els.h    | 165 +++++++++++++++++--------------
>   10 files changed, 275 insertions(+), 141 deletions(-)
> 



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-09 21:19 ` [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state() Bryan Gurney
@ 2025-07-09 22:12   ` Keith Busch
  2025-07-15 19:42     ` John Meneghini
  0 siblings, 1 reply; 17+ messages in thread
From: Keith Busch @ 2025-07-09 22:12 UTC (permalink / raw)
  To: Bryan Gurney
  Cc: linux-nvme, hch, sagi, axboe, james.smart, dick.kennedy, njavali,
	linux-scsi, hare, jmeneghi

On Wed, Jul 09, 2025 at 05:19:18PM -0400, Bryan Gurney wrote:
> If a controller has received a link integrity or congestion event, and
> has the NVME_CTRL_MARGINAL flag set, emit "marginal" in the state
> instead of "live", to identify the marginal paths.

IMO, this attribute looks more aligned to report in the ana_state
instead of overriding the controller's state.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-09 22:12   ` Keith Busch
@ 2025-07-15 19:42     ` John Meneghini
  2025-07-15 20:03       ` Keith Busch
  0 siblings, 1 reply; 17+ messages in thread
From: John Meneghini @ 2025-07-15 19:42 UTC (permalink / raw)
  To: Keith Busch, Bryan Gurney
  Cc: linux-nvme, hch, sagi, axboe, james.smart, dick.kennedy, njavali,
	linux-scsi, hare

On 7/9/25 6:12 PM, Keith Busch wrote:
> On Wed, Jul 09, 2025 at 05:19:18PM -0400, Bryan Gurney wrote:
>> If a controller has received a link integrity or congestion event, and
>> has the NVME_CTRL_MARGINAL flag set, emit "marginal" in the state
>> instead of "live", to identify the marginal paths.
> 
> IMO, this attribute looks more aligned to report in the ana_state
> instead of overriding the controller's state.
> 

We can't really do this because the ANA state is a documented protocol state.

The linux controller state is purely a linux software defined state.  Unless I am wrong, there is nothing in the NVMe specification which defines the nvme_ctrl_state.

This is purely a Linux definition and we should be able to change it any way we want.

We debated adding a new NVME_CTRL_MARGINAL state to this data structure,

enum nvme_ctrl_state {
         NVME_CTRL_NEW,
         NVME_CTRL_LIVE,
         NVME_CTRL_RESETTING,
         NVME_CTRL_CONNECTING,
         NVME_CTRL_DELETING,
         NVME_CTRL_DELETING_NOIO,
         NVME_CTRL_DEAD,
};

If you don't like the flag we can do that. However, that doesn't seem worth the effort since Hannes has this working now with a flag.

/John



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-15 19:42     ` John Meneghini
@ 2025-07-15 20:03       ` Keith Busch
  2025-07-16  6:07         ` Hannes Reinecke
  0 siblings, 1 reply; 17+ messages in thread
From: Keith Busch @ 2025-07-15 20:03 UTC (permalink / raw)
  To: John Meneghini
  Cc: Bryan Gurney, linux-nvme, hch, sagi, axboe, james.smart,
	dick.kennedy, njavali, linux-scsi, hare

On Tue, Jul 15, 2025 at 03:42:32PM -0400, John Meneghini wrote:
> On 7/9/25 6:12 PM, Keith Busch wrote:
> > On Wed, Jul 09, 2025 at 05:19:18PM -0400, Bryan Gurney wrote:
> > > If a controller has received a link integrity or congestion event, and
> > > has the NVME_CTRL_MARGINAL flag set, emit "marginal" in the state
> > > instead of "live", to identify the marginal paths.
> > 
> > IMO, this attribute looks more aligned to report in the ana_state
> > instead of overriding the controller's state.
> > 
> 
> We can't really do this because the ANA state is a documented protocol state.
> 
> The linux controller state is purely a linux software defined state.  Unless I am wrong, there is nothing in the NVMe specification which defines the nvme_ctrl_state.

Totally correct.
 
> This is purely a linux definition and we should be able to change it in any way we want.

My kneejerk reaction is against adding new controller states. We have
state checks sprinkled about, and special states just make that more
fragile.
 
> We debated adding a new NVME_CTRL_MARGINAL state to this data structure,
> 
> enum nvme_ctrl_state {
>         NVME_CTRL_NEW,
>         NVME_CTRL_LIVE,
>         NVME_CTRL_RESETTING,
>         NVME_CTRL_CONNECTING,
>         NVME_CTRL_DELETING,
>         NVME_CTRL_DELETING_NOIO,
>         NVME_CTRL_DEAD,
> };
> 
> If you don't like the flag we can do that. However, that doesn't seem worth the effort since Hannes has this working now with a flag.

What you're describing is a "path" state, not a controller state which
is why I'm suggesting the "ana_state" attribute since nothing else
represents the path fitness. If nvme can't describe this condition, then
maybe it should?

Where does this 'FPIN LI' message originate from? The end point or
something inbetween? If it's the endpoint (or if both sides get the same
message?), then an ANA state to non-optimal should be possible, no? And
we already have the infrastructure to react to changing ANA states, so
you can transition to optimal if something gets repaired.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-15 20:03       ` Keith Busch
@ 2025-07-16  6:07         ` Hannes Reinecke
  2025-07-22  2:57           ` Keith Busch
  0 siblings, 1 reply; 17+ messages in thread
From: Hannes Reinecke @ 2025-07-16  6:07 UTC (permalink / raw)
  To: Keith Busch, John Meneghini
  Cc: Bryan Gurney, linux-nvme, hch, sagi, axboe, james.smart,
	dick.kennedy, njavali, linux-scsi

On 7/15/25 22:03, Keith Busch wrote:
> On Tue, Jul 15, 2025 at 03:42:32PM -0400, John Meneghini wrote:
>> On 7/9/25 6:12 PM, Keith Busch wrote:
>>> On Wed, Jul 09, 2025 at 05:19:18PM -0400, Bryan Gurney wrote:
>>>> If a controller has received a link integrity or congestion event, and
>>>> has the NVME_CTRL_MARGINAL flag set, emit "marginal" in the state
>>>> instead of "live", to identify the marginal paths.
>>>
>>> IMO, this attribute looks more aligned to report in the ana_state
>>> instead of overriding the controller's state.
>>>
>>
>> We can't really do this because the ANA state is a documented protocol state.
>>
>> The linux controller state is purely a linux software defined state.  Unless
>> I am wrong, there is nothing in the NVMe specification which defines
>> the nvme_ctrl_state.
> Totally correct.
>   
>> This is purely a linux definition and we should be able to change it in any way we want.
> 
> My kneejerk reaction is against adding new controller states. We have
> state checks sprinkled about, and special states just make that more
> fragile.
>   
Yeah, controller states are not a good fit. We've seen the issues when
trying to introduce a new state for firmware update.

>> We debated adding a new NVME_CTRL_MARGINAL state to this data structure,
>>
>> enum nvme_ctrl_state {
>>          NVME_CTRL_NEW,
>>          NVME_CTRL_LIVE,
>>          NVME_CTRL_RESETTING,
>>          NVME_CTRL_CONNECTING,
>>          NVME_CTRL_DELETING,
>>          NVME_CTRL_DELETING_NOIO,
>>          NVME_CTRL_DEAD,
>> };
>>
>> If you don't like the flag we can do that. However, that doesn't seem worth the effort since Hannes has this working now with a flag.
> 
> What you're describing is a "path" state, not a controller state which
> is why I'm suggesting the "ana_state" attribute since nothing else
> represents the path fitness. If nvme can't describe this condition, then
> maybe it should?
> 
We probably could, but that feels a bit cumbersome.
Thing is, the FPIN LI (link integrity) message is just one of a set of
possible messages (congestion is another, but even more are defined).
When adding a separate ANA state for that, the question would be raised
how the other states would fit in.
From a conceptual side, FPIN LI really is equivalent to a flaky
path, which can happen at any time without any specific information
anyway.
Again, that makes it questionable whether it should be specified in
terms of ANA states.

> Where does this 'FPIN LI' message originate from? The end point or
> something inbetween? If it's the endpoint (or if both sides get the same
> message?), then an ANA state to non-optimal should be possible, no? And
> we already have the infrastructure to react to changing ANA states, so
> you can transition to optimal if something gets repaired.

It's typically generated by the fabric/switch once it detects a link
integrity problem on one of the links on a given path.

As mentioned above, it really is an attempt to codify the 'flaky path'
scenario, where occasional errors are generated but I/O remains
possible. So it really is an overlay over the ANA states, as _any_
path might be affected.
This discussion has centered only on 'optimal' paths, as our path
selectors really only care about optimized paths; non-optimized
paths are not considered here, which might skew the view of this
patchset somewhat.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-16  6:07         ` Hannes Reinecke
@ 2025-07-22  2:57           ` Keith Busch
  2025-07-22  6:41             ` Hannes Reinecke
  0 siblings, 1 reply; 17+ messages in thread
From: Keith Busch @ 2025-07-22  2:57 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: John Meneghini, Bryan Gurney, linux-nvme, hch, sagi, axboe,
	james.smart, dick.kennedy, njavali, linux-scsi

On Wed, Jul 16, 2025 at 08:07:51AM +0200, Hannes Reinecke wrote:
> On 7/15/25 22:03, Keith Busch wrote:
> > 
> > What you're describing is a "path" state, not a controller state which
> > is why I'm suggesting the "ana_state" attribute since nothing else
> > represents the path fitness. If nvme can't describe this condition, then
> > maybe it should?
> > 
> We probably could, but that feels a bit cumbersome.
> Thing is, the FPIN LI (link integrity) message is just one of a set of
> possible messages (congestion is another, but even more are defined).
> When adding a separate ANA state for that, the question would be raised
> how the other states would fit in.
> From a conceptual side, FPIN LI really is equivalent to a flaky
> path, which can happen at any time without any specific information
> anyway.
> Again, that makes it questionable whether it should be specified in
> terms of ANA states.

I see. Re-reading ANA, it is more aligned to describing a controller as
active/passive or primary/secondary to the backing storage access rather
than the state of the host nexus, so I agree it's not well suited
for an ANA state. :(
 
> > Where does this 'FPIN LI' message originate from? The end point or
> > something inbetween? If it's the endpoint (or if both sides get the same
> > message?), then an ANA state to non-optimal should be possible, no? And
> > we already have the infrastructure to react to changing ANA states, so
> > you can transition to optimal if something gets repaired.
> 
> It's typically generated by the fabric/switch once it detects a link
> integrity problem on one of the links on a given path.
> 
> As mentioned above, it really is an attempt to codify the 'flaky path'
> scenario, where occasional errors are generated but I/O remains
> possible. So it really is an overlay over the ANA states, as _any_
> path might be affected.
> This discussion only centered around 'optimal' paths as our path
> selectors really only care about optimized paths; non-optimized
> paths are not considered here.
> Which might skew the view of this patchset somewhat.

Okay, but can we call it "degraded" instead of "marginal"? The latter
implies the poor quality is endemic to that path rather than a temporary
condition.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-22  2:57           ` Keith Busch
@ 2025-07-22  6:41             ` Hannes Reinecke
  2025-07-23 16:58               ` John Meneghini
  0 siblings, 1 reply; 17+ messages in thread
From: Hannes Reinecke @ 2025-07-22  6:41 UTC (permalink / raw)
  To: Keith Busch
  Cc: John Meneghini, Bryan Gurney, linux-nvme, hch, sagi, axboe,
	james.smart, dick.kennedy, njavali, linux-scsi

On 7/22/25 04:57, Keith Busch wrote:
> On Wed, Jul 16, 2025 at 08:07:51AM +0200, Hannes Reinecke wrote:
>> On 7/15/25 22:03, Keith Busch wrote:
>>>
>>> What you're describing is a "path" state, not a controller state which
>>> is why I'm suggesting the "ana_state" attribute since nothing else
>>> represents the path fitness. If nvme can't describe this condition, then
>>> maybe it should?
>>>
>> We probably could, but that feels a bit cumbersome.
>> Thing is, the FPIN LI (link integrity) message is just one of a set of
>> possible messages (congestion is another, but even more are defined).
>> When adding a separate ANA state for that, the question would be raised
>> how the other states would fit in.
>> From a conceptual side, FPIN LI really is equivalent to a flaky
>> path, which can happen at any time without any specific information
>> anyway.
>> Again, that makes it questionable whether it should be specified in
>> terms of ANA states.
> 
> I see. Re-reading ANA, it is more aligned to describing a controller as
> active/passive or primary/secondary to the backing storage access rather
> than the state of the host nexus, so I agree it's not well suited
> for an ANA state. :(
>   
>>> Where does this 'FPIN LI' message originate from? The end point or
>>> something inbetween? If it's the endpoint (or if both sides get the same
>>> message?), then an ANA state to non-optimal should be possible, no? And
>>> we already have the infrastructure to react to changing ANA states, so
>>> you can transition to optimal if something gets repaired.
>>
>> It's typically generated by the fabric/switch once it detects a link
>> integrity problem on one of the links on a given path.
>>
>> As mentioned above, it really is an attempt to codify the 'flaky path'
>> scenario, where occasional errors are generated but I/O remains
>> possible. So it really is an overlay over the ANA states, as _any_
>> path might be affected.
>> This discussion only centered around 'optimal' paths as our path
>> selectors really only care about optimized paths; non-optimized
>> paths are not considered here.
>> Which might skew the view of this patchset somewhat.
> 
> Okay, but can we call it "degraded" instead of "marginal"? The latter
> implies the poor quality is endemic to that path rather than a temporary
> condition.

Sure we can.
(Although technically it _is_ endemic as it won't change without
user interaction. But I digress :-)

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state()
  2025-07-22  6:41             ` Hannes Reinecke
@ 2025-07-23 16:58               ` John Meneghini
  0 siblings, 0 replies; 17+ messages in thread
From: John Meneghini @ 2025-07-23 16:58 UTC (permalink / raw)
  To: Hannes Reinecke, Keith Busch
  Cc: Bryan Gurney, linux-nvme, hch, sagi, axboe, james.smart,
	dick.kennedy, njavali, linux-scsi


On 7/22/25 2:41 AM, Hannes Reinecke wrote:
> On 7/22/25 04:57, Keith Busch wrote:
>> On Wed, Jul 16, 2025 at 08:07:51AM +0200, Hannes Reinecke wrote:
>>
>> Okay, but can we call it "degraded" instead of "marginal"? The latter
>> implies the poor quality is endemic to that path rather than a temporary
>> condition.
> 
> Sure we can.
> (Although technically it _is_ endemic as it won't change without
> user interaction. But I digress :-)

OK. We'll change this from "marginal" to "degraded" and I'll have Bryan send a v9 patch series.

Is there anything else we want to change with this series before we do that?

/John



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2025-07-23 16:58 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-07-09 21:19 [PATCH v8 0/8] nvme-fc: FPIN link integrity handling Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 1/8] fc_els: use 'union fc_tlv_desc' Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 2/8] nvme: add NVME_CTRL_MARGINAL flag Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 3/8] nvme-fc: marginal path handling Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 4/8] nvme-fc: nvme_fc_fpin_rcv() callback Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 5/8] lpfc: enable FPIN notification for NVMe Bryan Gurney
2025-07-09 21:19 ` [PATCH v8 6/8] qla2xxx: " Bryan Gurney
2025-07-09 22:01   ` John Meneghini
2025-07-09 21:19 ` [PATCH v8 7/8] nvme: sysfs: emit the marginal path state in show_state() Bryan Gurney
2025-07-09 22:12   ` Keith Busch
2025-07-15 19:42     ` John Meneghini
2025-07-15 20:03       ` Keith Busch
2025-07-16  6:07         ` Hannes Reinecke
2025-07-22  2:57           ` Keith Busch
2025-07-22  6:41             ` Hannes Reinecke
2025-07-23 16:58               ` John Meneghini
2025-07-09 22:05 ` [PATCH v8 0/8] nvme-fc: FPIN link integrity handling John Meneghini

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).