* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
@ 2017-10-28 17:21 James Smart
2017-10-28 17:21 ` [PATCH 1/5] nvmet: call transport on subsystem add and delete James Smart
` (6 more replies)
0 siblings, 7 replies; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
A transport may have a transport-specific mechanism that can signal
when discovery controller content has changed and request a host
to reconnect to the discovery controller.
FC is such a transport. RSCNs may be generated by the FC port with
the discovery server, with the RSCNs then broadcast to the FC-NVME
hosts. A host, upon receiving the RSCN, would validate connectivity
then initiate a discovery controller rescan, allowing new subsystems
to be connected to or updating subsystem connectivity tables.
These patches:
- Modify the nvmet core layer to call a transport callback on every
subsystem add or remove from a transport port.
- Modify the nvmet-fc transport to support the callback, and add its
own internal lldd api to generate RSCNs via the lldd.
- Modify the lpfc driver to send/receive RSCNs for FC-NVME: transmit
the changed attribute RSCN on the target, receiving the RSCN on
the initiator and invoking the nvmet-fc transport rescan api.
Also adds a manual sysfs mechanism to generate the RSCN on the target.
Dick Kennedy (1):
lpfc: Add sysfs interface to post NVME RSCN
James Smart (4):
nvmet: call transport on subsystem add and delete
nvmet_fc: support transport subsystem events
lpfc: Add support to generate RSCN events for nport
lpfc: Add NVME rescan support via RSCNs
drivers/nvme/target/configfs.c | 2 +
drivers/nvme/target/core.c | 10 ++++
drivers/nvme/target/fc.c | 10 ++++
drivers/nvme/target/nvmet.h | 2 +
drivers/scsi/lpfc/lpfc.h | 2 +
drivers/scsi/lpfc/lpfc_attr.c | 62 ++++++++++++++++++++
drivers/scsi/lpfc/lpfc_crtn.h | 4 ++
drivers/scsi/lpfc/lpfc_els.c | 118 +++++++++++++++++++++++++++++++++++++++
drivers/scsi/lpfc/lpfc_hbadisc.c | 35 ++++++++++++
drivers/scsi/lpfc/lpfc_hw.h | 9 +++
drivers/scsi/lpfc/lpfc_nvme.c | 42 ++++++++++++++
drivers/scsi/lpfc/lpfc_nvmet.c | 18 ++++++
drivers/scsi/lpfc/lpfc_sli.c | 1 +
include/linux/nvme-fc-driver.h | 6 ++
14 files changed, 321 insertions(+)
--
2.13.1
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/5] nvmet: call transport on subsystem add and delete
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
@ 2017-10-28 17:21 ` James Smart
2017-10-28 17:21 ` [PATCH 2/5] nvmet_fc: support transport subsystem events James Smart
` (5 subsequent siblings)
6 siblings, 0 replies; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
A transport may have a transport-specific mechanism that can signal
when discovery controller content has changed and request a host
to reconnect to the discovery controller.
FC is such a transport. RSCNs may be generated by the FC port with
the discovery server, with the RSCNs then broadcast to the FC-NVME
hosts. A host, upon receiving the RSCN, would validate connectivity
then initiate a discovery controller rescan, allowing new subsystems
to be connected to or updating subsystem connectivity tables.
Modify the subsystem link/unlink routines to always call a new
nvmet_port_subsystem_chg() routine whenever a new subsystem is
linked to a port or removed from a port.
The nvmet_port_subsystem_chg() routine will look to see if the
transport supports a discovery change callback, and if so, calls
the callback.
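The optional-callback dispatch performed by nvmet_port_subsystem_chg() can be sketched in userspace C as follows. This is a minimal illustration, not the kernel code: struct and field names are simplified, and the extra NULL check on the ops pointer (for an unregistered transport) is an assumption this sketch adds, while the posted patch only checks the discov_chg member.

```c
#include <assert.h>
#include <stddef.h>

struct port;

/* Simplified stand-in for struct nvmet_fabrics_ops */
struct fabrics_ops {
	void (*discov_chg)(struct port *port);   /* optional entrypoint */
};

struct port {
	const struct fabrics_ops *ops;           /* registered transport; may be NULL */
	int notifications;                       /* test bookkeeping, not in the patch */
};

/* Dispatcher mirroring nvmet_port_subsystem_chg(): invoke the callback
 * only if the transport provided one. The NULL check on ops itself is
 * an extra safety assumption made in this sketch. */
static void port_subsystem_chg(struct port *port)
{
	if (port->ops && port->ops->discov_chg)
		port->ops->discov_chg(port);
}

static void fc_discov_chg(struct port *port)
{
	port->notifications++;                   /* stand-in for "send an RSCN" */
}

static const struct fabrics_ops fc_ops   = { .discov_chg = fc_discov_chg };
static const struct fabrics_ops loop_ops = { .discov_chg = NULL };
```

A transport that does not implement discov_chg (loop_ops above) is simply skipped, so only FC-like transports generate notifications.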
Signed-off-by: James Smart <james.smart at broadcom.com>
---
drivers/nvme/target/configfs.c | 2 ++
drivers/nvme/target/core.c | 10 ++++++++++
drivers/nvme/target/nvmet.h | 2 ++
3 files changed, 14 insertions(+)
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index b6aeb1d70951..38adcd7eafb1 100644
--- a/drivers/nvme/target/configfs.c
+++ b/drivers/nvme/target/configfs.c
@@ -503,6 +503,7 @@ static int nvmet_port_subsys_allow_link(struct config_item *parent,
list_add_tail(&link->entry, &port->subsystems);
nvmet_genctr++;
up_write(&nvmet_config_sem);
+ nvmet_port_subsystem_chg(port);
return 0;
out_free_link:
@@ -529,6 +530,7 @@ static void nvmet_port_subsys_drop_link(struct config_item *parent,
found:
list_del(&p->entry);
nvmet_genctr++;
+ nvmet_port_subsystem_chg(port);
if (list_empty(&port->subsystems))
nvmet_disable_port(port);
up_write(&nvmet_config_sem);
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index da088293eafc..f70d900774d0 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -206,6 +206,16 @@ void nvmet_disable_port(struct nvmet_port *port)
module_put(ops->owner);
}
+void nvmet_port_subsystem_chg(struct nvmet_port *port)
+{
+ struct nvmet_fabrics_ops *ops;
+
+ ops = nvmet_transports[port->disc_addr.trtype];
+
+ if (ops->discov_chg)
+ ops->discov_chg(port);
+}
+
static void nvmet_keep_alive_timer(struct work_struct *work)
{
struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index e342f02845c1..87c7f6e84694 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -209,6 +209,7 @@ struct nvmet_fabrics_ops {
int (*add_port)(struct nvmet_port *port);
void (*remove_port)(struct nvmet_port *port);
void (*delete_ctrl)(struct nvmet_ctrl *ctrl);
+ void (*discov_chg)(struct nvmet_port *port);
};
#define NVMET_MAX_INLINE_BIOVEC 8
@@ -302,6 +303,7 @@ void nvmet_unregister_transport(struct nvmet_fabrics_ops *ops);
int nvmet_enable_port(struct nvmet_port *port);
void nvmet_disable_port(struct nvmet_port *port);
+void nvmet_port_subsystem_chg(struct nvmet_port *port);
void nvmet_referral_enable(struct nvmet_port *parent, struct nvmet_port *port);
void nvmet_referral_disable(struct nvmet_port *port);
--
2.13.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 2/5] nvmet_fc: support transport subsystem events
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
2017-10-28 17:21 ` [PATCH 1/5] nvmet: call transport on subsystem add and delete James Smart
@ 2017-10-28 17:21 ` James Smart
2017-10-28 17:21 ` [PATCH 3/5] lpfc: Add support to generate RSCN events for nport James Smart
` (4 subsequent siblings)
6 siblings, 0 replies; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
This patch adds support for transport events to signal discovery
controller changes.
Add a new lldd api for subsystem change notification; the lldd is to
generate the appropriate RSCN for the FC-NVME targetport.
Add a transport op for the discov_chg callback from the nvmet layer.
Signed-off-by: James Smart <james.smart at broadcom.com>
---
drivers/nvme/target/fc.c | 10 ++++++++++
include/linux/nvme-fc-driver.h | 6 ++++++
2 files changed, 16 insertions(+)
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 58e010bdda3e..590133c35d11 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -2546,6 +2546,15 @@ nvmet_fc_remove_port(struct nvmet_port *port)
nvmet_fc_tgtport_put(tgtport);
}
+static void
+nvmet_fc_port_subsys_event(struct nvmet_port *port)
+{
+ struct nvmet_fc_tgtport *tgtport = port->priv;
+
+ if (tgtport && tgtport->ops->nvme_subsystem_change)
+ tgtport->ops->nvme_subsystem_change(&tgtport->fc_target_port);
+}
+
static struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops = {
.owner = THIS_MODULE,
.type = NVMF_TRTYPE_FC,
@@ -2554,6 +2563,7 @@ static struct nvmet_fabrics_ops nvmet_fc_tgt_fcp_ops = {
.remove_port = nvmet_fc_remove_port,
.queue_response = nvmet_fc_fcp_nvme_cmd_done,
.delete_ctrl = nvmet_fc_delete_ctrl,
+ .discov_chg = nvmet_fc_port_subsys_event,
};
static int __init nvmet_fc_init_module(void)
diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
index 2be4db353937..ac9d7f190650 100644
--- a/include/linux/nvme-fc-driver.h
+++ b/include/linux/nvme-fc-driver.h
@@ -344,6 +344,11 @@ struct nvme_fc_remote_port {
* indicating an FC transport Aborted status.
* Entrypoint is Mandatory.
*
+ * @nvme_subsystem_change: Called by the transport to generate FC state change
+ * notifications to NVME initiators. The state change notifications should
+ * cause the initiator to reconnect to the discovery controller on the
+ * targetport to look for new discovery log records.
+ *
* @max_hw_queues: indicates the maximum number of hw queues the LLDD
* supports for cpu affinitization.
* Value is Mandatory. Must be at least 1.
@@ -856,6 +861,7 @@ struct nvmet_fc_target_template {
struct nvmefc_tgt_fcp_req *fcpreq);
void (*defer_rcv)(struct nvmet_fc_target_port *tgtport,
struct nvmefc_tgt_fcp_req *fcpreq);
+ void (*nvme_subsystem_change)(struct nvmet_fc_target_port *tgtport);
u32 max_hw_queues;
u16 max_sgl_segments;
--
2.13.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 3/5] lpfc: Add support to generate RSCN events for nport
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
2017-10-28 17:21 ` [PATCH 1/5] nvmet: call transport on subsystem add and delete James Smart
2017-10-28 17:21 ` [PATCH 2/5] nvmet_fc: support transport subsystem events James Smart
@ 2017-10-28 17:21 ` James Smart
2017-10-28 17:21 ` [PATCH 4/5] lpfc: Add NVME rescan support via RSCNs James Smart
` (3 subsequent siblings)
6 siblings, 0 replies; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
This patch adds general Changed Attribute RSCN support, both transmit
and receive, to the driver. This support will be used with NVME to
request initiators to perform nvme discovery.
This patch:
- adds RSCN definitions
- adds statistics on RSCN transmissions
- creates a routine that will transmit a Changed Attribute RSCN
- handles RSCN receipt
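The new ELS_CMD_RSCN_XMT opcode added by this patch hard-codes the payload size into the command word. The following userspace sketch shows the packing and why the big- and little-endian defines are byte-reversals of each other; the constants are copied from the patch, but the field breakdown (command 0x61, page length 4, payload length 8) is this sketch's interpretation of the header comment.

```c
#include <assert.h>
#include <stdint.h>

#define ELS_CMD_RSCN_XMT_BE 0x61040008u  /* __BIG_ENDIAN_BITFIELD define */
#define ELS_CMD_RSCN_XMT_LE 0x08000461u  /* __LITTLE_ENDIAN_BITFIELD define */

/* Pack ELS payload word 0, read in big-endian (wire) byte order:
 * byte 0 = ELS command, byte 1 = page length, bytes 2-3 = payload length. */
static uint32_t els_word0(uint8_t cmd, uint8_t page_len, uint16_t payload_len)
{
	return ((uint32_t)cmd << 24) | ((uint32_t)page_len << 16) | payload_len;
}

/* The little-endian define is the same four bytes with the 32-bit
 * constant's byte order reversed, so a native store on a little-endian
 * CPU produces the identical wire layout. */
static uint32_t bswap32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
	       ((v << 8) & 0x00ff0000u) | (v << 24);
}
```

Under this interpretation, the 8-byte payload length covers the 4-byte command word plus the 4-byte fc_RSCN page defined later in the patch.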
Signed-off-by: Dick Kennedy <dick.kennedy at broadcom.com>
Signed-off-by: James Smart <james.smart at broadcom.com>
---
drivers/scsi/lpfc/lpfc.h | 1 +
drivers/scsi/lpfc/lpfc_crtn.h | 2 +
drivers/scsi/lpfc/lpfc_els.c | 114 +++++++++++++++++++++++++++++++++++++++
drivers/scsi/lpfc/lpfc_hbadisc.c | 35 ++++++++++++
drivers/scsi/lpfc/lpfc_hw.h | 9 ++++
drivers/scsi/lpfc/lpfc_sli.c | 1 +
6 files changed, 162 insertions(+)
diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 8eb3f96fe068..157788f0bc10 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -277,6 +277,7 @@ struct lpfc_stats {
uint32_t elsXmitADISC;
uint32_t elsXmitLOGO;
uint32_t elsXmitSCR;
+ uint32_t elsXmitRSCN;
uint32_t elsXmitRNID;
uint32_t elsXmitFARP;
uint32_t elsXmitFARPR;
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 7e300734b345..6657b71d5a57 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -142,6 +142,7 @@ int lpfc_issue_els_adisc(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_logo(struct lpfc_vport *, struct lpfc_nodelist *, uint8_t);
int lpfc_issue_els_npiv_logo(struct lpfc_vport *, struct lpfc_nodelist *);
int lpfc_issue_els_scr(struct lpfc_vport *, uint32_t, uint8_t);
+int lpfc_issue_els_rscn(struct lpfc_vport *vport, uint8_t retry);
int lpfc_issue_fabric_reglogin(struct lpfc_vport *);
int lpfc_els_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
int lpfc_ct_free_iocb(struct lpfc_hba *, struct lpfc_iocbq *);
@@ -357,6 +358,7 @@ void lpfc_mbox_timeout_handler(struct lpfc_hba *);
struct lpfc_nodelist *lpfc_findnode_did(struct lpfc_vport *, uint32_t);
struct lpfc_nodelist *lpfc_findnode_wwpn(struct lpfc_vport *,
struct lpfc_name *);
+struct lpfc_nodelist *lpfc_findnode_mapped(struct lpfc_vport *vport);
int lpfc_sli_issue_mbox_wait(struct lpfc_hba *, LPFC_MBOXQ_t *, uint32_t);
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 468a66371de9..936fc22dfcae 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -2868,6 +2868,109 @@ lpfc_cmpl_els_cmd(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
}
/**
+ * lpfc_issue_els_rscn - Issue an RSCN to a node on a vport
+ * @vport: pointer to a host virtual N_Port data structure.
+ * @retry: number of retries to the command IOCB.
+ *
+ * This routine issues an RSCN ELS on a @vport. For point-to-point
+ * (Pt2Pt) topologies, the first MAPPED node will get the RSCN
+ * notification. For Fabric topologies, the well-known SCR_DID
+ * address is notified so the fabric broadcasts the RSCN to all
+ * zone members.
+ *
+ * Note that, in lpfc_prep_els_iocb() routine, the reference count of ndlp
+ * will be incremented by 1 for holding the ndlp and the reference to ndlp
+ * will be stored into the context1 field of the IOCB for the completion
+ * callback function to the RSCN ELS command.
+ *
+ * Return code
+ * 0 - Successfully issued RSCN command
+ * 1 - Failed to issue RSCN command
+ **/
+int
+lpfc_issue_els_rscn(struct lpfc_vport *vport, uint8_t retry)
+{
+ struct lpfc_hba *phba = vport->phba;
+ struct lpfc_iocbq *elsiocb;
+ uint8_t *pcmd;
+ uint16_t cmdsize;
+ uint32_t value, nportid;
+ struct lpfc_nodelist *ndlp;
+
+ /* Not supported for private loop */
+ if (phba->fc_topology == LPFC_TOPOLOGY_LOOP &&
+ !(vport->fc_flag & FC_PUBLIC_LOOP))
+ return 1;
+
+ cmdsize = (sizeof(uint32_t) + sizeof(SCR));
+ if (vport->fc_flag & FC_PT2PT) {
+ ndlp = lpfc_findnode_mapped(vport);
+ if (!ndlp)
+ return 1;
+ } else {
+ nportid = SCR_DID;
+
+ ndlp = lpfc_findnode_did(vport, nportid);
+ if (!ndlp) {
+ ndlp = lpfc_nlp_init(vport, nportid);
+ if (!ndlp)
+ return 1;
+ lpfc_enqueue_node(vport, ndlp);
+ } else if (!NLP_CHK_NODE_ACT(ndlp)) {
+ ndlp = lpfc_enable_node(vport, ndlp,
+ NLP_STE_UNUSED_NODE);
+ if (!ndlp)
+ return 1;
+ }
+ }
+ elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp,
+ ndlp->nlp_DID, ELS_CMD_RSCN_XMT);
+
+ if (!elsiocb) {
+ /* This will trigger the release of the node just
+ * allocated
+ */
+ lpfc_nlp_put(ndlp);
+ return 1;
+ }
+
+ pcmd = (uint8_t *)(((struct lpfc_dmabuf *)elsiocb->context2)->virt);
+
+ *((uint32_t *)(pcmd)) = ELS_CMD_RSCN_XMT;
+ pcmd += sizeof(uint32_t);
+
+ /* Remainder of payload is the RSCN parameter page */
+ memset(pcmd, 0, sizeof(struct fc_RSCN));
+ /* Generic Attribute of 0 is used */
+ value = vport->fc_myDID;
+
+ ((struct fc_RSCN *)pcmd)->NPort_ID = cpu_to_be32(value);
+
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+ "Issue RSCN: did:x%x",
+ ndlp->nlp_DID, 0, 0);
+
+ phba->fc_stat.elsXmitRSCN++;
+ elsiocb->iocb_cmpl = lpfc_cmpl_els_cmd;
+ if (lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0) ==
+ IOCB_ERROR) {
+ /* The additional lpfc_nlp_put will cause the following
+ * lpfc_els_free_iocb routine to trigger the release of
+ * the node.
+ */
+ lpfc_nlp_put(ndlp);
+ lpfc_els_free_iocb(phba, elsiocb);
+ return 1;
+ }
+ /* This will cause the callback-function lpfc_cmpl_els_cmd to
+ * trigger the release of node.
+ */
+
+ lpfc_nlp_put(ndlp);
+ return 0;
+}
+
+/**
* lpfc_issue_els_scr - Issue a scr to an node on a vport
* @vport: pointer to a host virtual N_Port data structure.
* @nportid: N_Port identifier to the remote node.
@@ -6071,6 +6174,16 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
fc_host_post_event(shost, fc_get_event_number(),
FCH_EVT_RSCN, lp[i]);
+ /* Check if RSCN is coming from a direct-connected remote NPort */
+ if (vport->fc_flag & FC_PT2PT) {
+ /* If so, just ACC it, no other action needed for now */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "2024 pt2pt RSCN %08x Data: x%x x%x\n",
+ *lp, vport->fc_flag, payload_len);
+ lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL);
+ return 0;
+ }
+
/* If we are about to begin discovery, just ACC the RSCN.
* Discovery processing will satisfy it.
*/
@@ -7886,6 +7999,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
elsiocb->context1 = lpfc_nlp_get(ndlp);
elsiocb->vport = vport;
+ /* Since an RSCN can be of varying length, mask the length off */
if ((cmd & ELS_CMD_MASK) == ELS_CMD_RSCN) {
cmd &= ELS_CMD_MASK;
}
diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
index 20808349a80e..3eb37d5070c3 100644
--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
+++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
@@ -5119,6 +5119,41 @@ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
}
struct lpfc_nodelist *
+lpfc_findnode_mapped(struct lpfc_vport *vport)
+{
+ struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
+ struct lpfc_nodelist *ndlp;
+ uint32_t data1;
+ unsigned long iflags;
+
+ spin_lock_irqsave(shost->host_lock, iflags);
+
+ list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ if (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE ||
+ ndlp->nlp_state == NLP_STE_MAPPED_NODE) {
+ data1 = (((uint32_t)ndlp->nlp_state << 24) |
+ ((uint32_t)ndlp->nlp_xri << 16) |
+ ((uint32_t)ndlp->nlp_type << 8) |
+ ((uint32_t)ndlp->nlp_rpi & 0xff));
+ spin_unlock_irqrestore(shost->host_lock, iflags);
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+ "2025 FIND node DID "
+ "Data: x%p x%x x%x x%x %p\n",
+ ndlp, ndlp->nlp_DID,
+ ndlp->nlp_flag, data1,
+ ndlp->active_rrqs_xri_bitmap);
+ return ndlp;
+ }
+ }
+ spin_unlock_irqrestore(shost->host_lock, iflags);
+
+ /* FIND mapped node NOT FOUND */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
+ "2026 FIND mapped did NOT FOUND.\n");
+ return NULL;
+}
+
+struct lpfc_nodelist *
lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
{
struct Scsi_Host *shost = lpfc_shost_from_vport(vport);
diff --git a/drivers/scsi/lpfc/lpfc_hw.h b/drivers/scsi/lpfc/lpfc_hw.h
index bdc1f184f67a..a1443592fa46 100644
--- a/drivers/scsi/lpfc/lpfc_hw.h
+++ b/drivers/scsi/lpfc/lpfc_hw.h
@@ -561,6 +561,8 @@ struct fc_vft_header {
/*
* Extended Link Service LS_COMMAND codes (Payload Word 0)
+ * The supported payload size is also hard-coded into this word
+ * when necessary.
*/
#ifdef __BIG_ENDIAN_BITFIELD
#define ELS_CMD_MASK 0xffff0000
@@ -597,6 +599,7 @@ struct fc_vft_header {
#define ELS_CMD_RPS 0x56000000
#define ELS_CMD_RPL 0x57000000
#define ELS_CMD_FAN 0x60000000
+#define ELS_CMD_RSCN_XMT 0x61040008
#define ELS_CMD_RSCN 0x61040000
#define ELS_CMD_SCR 0x62000000
#define ELS_CMD_RNID 0x78000000
@@ -637,6 +640,7 @@ struct fc_vft_header {
#define ELS_CMD_RPS 0x56
#define ELS_CMD_RPL 0x57
#define ELS_CMD_FAN 0x60
+#define ELS_CMD_RSCN_XMT 0x08000461
#define ELS_CMD_RSCN 0x0461
#define ELS_CMD_SCR 0x62
#define ELS_CMD_RNID 0x78
@@ -1088,6 +1092,11 @@ struct fc_lcb_res_frame {
uint16_t lcb_duration; /* LCB Payload Word 2, bit 15:0 */
};
+struct fc_RSCN { /* RSCN ELS frame */
+#define RSCN_EQ_PORT_ATTRIBUTE 2
+ uint32_t NPort_ID;
+};
+
/*
* Read Diagnostic Parameters (RDP) ELS frame.
*/
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index 8b119f87b51d..53a4ac67c9e1 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -8761,6 +8761,7 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
if (if_type == LPFC_SLI_INTF_IF_TYPE_2) {
if (pcmd && (*pcmd == ELS_CMD_FLOGI ||
*pcmd == ELS_CMD_SCR ||
+ *pcmd == ELS_CMD_RSCN_XMT ||
*pcmd == ELS_CMD_FDISC ||
*pcmd == ELS_CMD_LOGO ||
*pcmd == ELS_CMD_PLOGI)) {
--
2.13.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 4/5] lpfc: Add NVME rescan support via RSCNs
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
` (2 preceding siblings ...)
2017-10-28 17:21 ` [PATCH 3/5] lpfc: Add support to generate RSCN events for nport James Smart
@ 2017-10-28 17:21 ` James Smart
2017-10-28 17:21 ` [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN James Smart
` (2 subsequent siblings)
6 siblings, 0 replies; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
This patch adds NVME rescan support via RSCNs
NVME Target:
Adds support for the new nvme_subsystem_change callback. The
callback is invoked by the nvmet_fc transport when it detects
conditions under which nvme discovery should be performed again,
typically the addition of new subsystems to the nvmet
configuration (thus the transport ties the callback invocation
to port add calls).
The callback routine generates an RSCN to the fabric in order
to deliver RSCN events to all initiators with a view of the
nport.
NVME Host:
Upon reception of an RSCN event for an nport that supports
NVME and is currently logged in and PRLI'd, the initiator
calls the nvme_fc transport nvme_fc_rescan_remoteport()
routine to request that nvme discovery be performed again
on the port pair.
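The host-side gating described above (only rescan a remote port that is an NVME target and fully logged in) can be sketched as follows. This is a hypothetical userspace illustration: the role bit value and state names are stand-ins, not the actual lpfc/nvme_fc definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PORT_ROLE_NVME_TARGET 0x02u    /* assumed role flag bit */

/* Simplified discovery state machine: MAPPED means PLOGI and PRLI
 * have completed, i.e. the port pair is fully logged in. */
enum node_state { NODE_UNUSED, NODE_PLOGI, NODE_PRLI, NODE_MAPPED };

/* Rescan only when the remote port advertises the NVME target role
 * and the node is in the MAPPED state, mirroring the check in
 * lpfc_nvme_rescan_port(). */
static bool should_rescan(uint32_t port_role, enum node_state state)
{
	return (port_role & PORT_ROLE_NVME_TARGET) && state == NODE_MAPPED;
}
```

An RSCN received mid-login (PLOGI or PRLI still outstanding) is thus deliberately ignored; normal discovery completion will pick up the new state.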
Signed-off-by: Dick Kennedy <dick.kennedy at broadcom.com>
Signed-off-by: James Smart <james.smart at broadcom.com>
---
drivers/scsi/lpfc/lpfc_crtn.h | 2 ++
drivers/scsi/lpfc/lpfc_els.c | 4 ++++
drivers/scsi/lpfc/lpfc_nvme.c | 42 ++++++++++++++++++++++++++++++++++++++++++
drivers/scsi/lpfc/lpfc_nvmet.c | 18 ++++++++++++++++++
4 files changed, 66 insertions(+)
diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
index 6657b71d5a57..86f034fefa46 100644
--- a/drivers/scsi/lpfc/lpfc_crtn.h
+++ b/drivers/scsi/lpfc/lpfc_crtn.h
@@ -546,6 +546,8 @@ int lpfc_sli4_dump_page_a0(struct lpfc_hba *phba, struct lpfcMboxq *mbox);
void lpfc_mbx_cmpl_rdp_page_a0(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb);
/* NVME interfaces. */
+void lpfc_nvme_rescan_port(struct lpfc_vport *vport,
+ struct lpfc_nodelist *ndlp);
void lpfc_nvme_unregister_port(struct lpfc_vport *vport,
struct lpfc_nodelist *ndlp);
int lpfc_nvme_register_port(struct lpfc_vport *vport,
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index 936fc22dfcae..b02684113c75 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -6071,6 +6071,10 @@ lpfc_rscn_recovery_check(struct lpfc_vport *vport)
if (vport->phba->nvmet_support)
continue;
+ /* Check to see if we need to NVME rescan this remoteport. */
+ if (ndlp->nlp_fc4_type & NLP_FC4_NVME)
+ lpfc_nvme_rescan_port(vport, ndlp);
+
lpfc_disc_state_machine(vport, ndlp, NULL,
NLP_EVT_DEVICE_RECOVERY);
lpfc_cancel_retry_delay_tmo(vport, ndlp);
diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
index 23bdb1ca106e..b372da3672eb 100644
--- a/drivers/scsi/lpfc/lpfc_nvme.c
+++ b/drivers/scsi/lpfc/lpfc_nvme.c
@@ -2375,6 +2375,48 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
#endif
}
+/* lpfc_nvme_rescan_port - Check to see if we should rescan this remoteport
+ *
+ * If the ndlp represents an NVME Target that we are logged into,
+ * ping the NVME FC Transport layer to initiate a device rescan
+ * on this remote NPort.
+ */
+void
+lpfc_nvme_rescan_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+{
+#if (IS_ENABLED(CONFIG_NVME_FC))
+ struct lpfc_nvme_rport *rport;
+ struct nvme_fc_remote_port *remoteport;
+
+ rport = ndlp->nrport;
+
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+ "6170 Rescan NPort DID x%06x type x%x state x%x rport %p\n",
+ ndlp->nlp_DID, ndlp->nlp_type, ndlp->nlp_state, rport);
+ if (!rport)
+ goto input_err;
+ remoteport = rport->remoteport;
+ if (!remoteport)
+ goto input_err;
+
+ /* Only rescan if we are an NVME target in the MAPPED state */
+ if (remoteport->port_role & FC_PORT_ROLE_NVME_TARGET &&
+ ndlp->nlp_state == NLP_STE_MAPPED_NODE) {
+ nvme_fc_rescan_remoteport(remoteport);
+
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
+ "6172 NVME rescanned DID x%06x "
+ "port_state x%x\n",
+ ndlp->nlp_DID, remoteport->port_state);
+ }
+input_err:
+ return;
+#endif
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_DISC,
+ "6169 State error: lport %p, rport %p FCID x%06x\n",
+ vport->localport, ndlp->rport, ndlp->nlp_DID);
+}
+
/* lpfc_nvme_unregister_port - unbind the DID and port_role from this rport.
*
* There is no notion of Devloss or rport recovery from the current
diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
index 0b7c1a49e203..51888192eed7 100644
--- a/drivers/scsi/lpfc/lpfc_nvmet.c
+++ b/drivers/scsi/lpfc/lpfc_nvmet.c
@@ -866,6 +866,23 @@ lpfc_nvmet_defer_rcv(struct nvmet_fc_target_port *tgtport,
lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */
}
+static void
+lpfc_nvmet_subsystem_change(struct nvmet_fc_target_port *tgtport)
+{
+ struct lpfc_nvmet_tgtport *tgtp;
+ struct lpfc_hba *phba;
+ uint32_t rc;
+
+ tgtp = tgtport->private;
+ phba = tgtp->phba;
+
+ rc = lpfc_issue_els_rscn(phba->pport, 0);
+ lpfc_printf_log(phba, KERN_ERR, LOG_NVME,
+ "6420 NVMET subsystem change: "
+ "Notification %s\n",
+ (rc) ? "Failed" : "Sent");
+}
+
static struct nvmet_fc_target_template lpfc_tgttemplate = {
.targetport_delete = lpfc_nvmet_targetport_delete,
.xmt_ls_rsp = lpfc_nvmet_xmt_ls_rsp,
@@ -873,6 +890,7 @@ static struct nvmet_fc_target_template lpfc_tgttemplate = {
.fcp_abort = lpfc_nvmet_xmt_fcp_abort,
.fcp_req_release = lpfc_nvmet_xmt_fcp_release,
.defer_rcv = lpfc_nvmet_defer_rcv,
+ .nvme_subsystem_change = lpfc_nvmet_subsystem_change,
.max_hw_queues = 1,
.max_sgl_segments = LPFC_NVMET_DEFAULT_SEGS,
--
2.13.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
` (3 preceding siblings ...)
2017-10-28 17:21 ` [PATCH 4/5] lpfc: Add NVME rescan support via RSCNs James Smart
@ 2017-10-28 17:21 ` James Smart
2017-11-02 20:09 ` Ewan D. Milne
2017-10-29 16:11 ` [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan Sagi Grimberg
2017-11-01 15:36 ` Christoph Hellwig
6 siblings, 1 reply; 18+ messages in thread
From: James Smart @ 2017-10-28 17:21 UTC (permalink / raw)
From: Dick Kennedy <dick.kennedy@broadcom.com>
To support scenarios that aren't tied to nvmetcli add-port operations,
which is currently the only place the transport invokes the
nvme_subsystem_change callback, add a sysfs attribute to lpfc that can
be written to cause an RSCN to be generated for the nport.
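The write-to-trigger behavior can be sketched in userspace C as below. Names are hypothetical stand-ins; note that the posted patch returns strlen(buf) on success, whereas returning the write count, as this sketch does, is the more common sysfs store convention.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Test hook standing in for lpfc_issue_els_rscn(): 0 = sent,
 * nonzero = failed to issue. */
static int issue_rscn_rc;
static int issue_rscn(void) { return issue_rscn_rc; }

/* Sketch of the store handler: any write triggers an RSCN; failure
 * maps to -EIO, success consumes the entire write. */
static long force_rscn_store(const char *buf, size_t count)
{
	(void)buf;                      /* contents ignored; any write triggers */
	if (issue_rscn())
		return -EIO;
	return (long)count;             /* consume the entire write */
}
```

Returning the full count tells the VFS the whole buffer was consumed, so userspace tools like `echo 1 > .../lpfc_force_rscn` complete in a single write.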
Signed-off-by: Dick Kennedy <dick.kennedy at broadcom.com>
Signed-off-by: James Smart <james.smart at broadcom.com>
---
drivers/scsi/lpfc/lpfc.h | 1 +
drivers/scsi/lpfc/lpfc_attr.c | 62 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 63 insertions(+)
diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index 157788f0bc10..52d403548a3a 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -781,6 +781,7 @@ struct lpfc_hba {
uint32_t cfg_use_msi;
uint32_t cfg_auto_imax;
uint32_t cfg_fcp_imax;
+ uint32_t cfg_force_rscn;
uint32_t cfg_fcp_cpu_map;
uint32_t cfg_fcp_io_channel;
uint32_t cfg_suppress_rsp;
diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
index c17677f494af..6e8e54e65e25 100644
--- a/drivers/scsi/lpfc/lpfc_attr.c
+++ b/drivers/scsi/lpfc/lpfc_attr.c
@@ -4469,6 +4469,66 @@ static DEVICE_ATTR(lpfc_req_fw_upgrade, S_IRUGO | S_IWUSR,
lpfc_request_firmware_upgrade_store);
/**
+ * lpfc_force_rscn_store
+ *
+ * @dev: class device that is converted into a Scsi_host.
+ * @attr: device attribute, not used.
+ * @buf: string written to the attribute (contents ignored; any write
+ *       triggers the RSCN).
+ * @count: unused variable.
+ *
+ * Description:
+ * Force the switch to send a RSCN to all other NPorts in our zone
+ * If we are direct connect pt2pt, build the RSCN command ourself
+ * and send to the other NPort. Not supported for private loop.
+ *
+ * Returns:
+ * length of the input buffer - on success
+ * -EIO - if command is not sent
+ **/
+static ssize_t
+lpfc_force_rscn_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(dev);
+ struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+ int i;
+
+ i = lpfc_issue_els_rscn(vport, 0);
+ if (i)
+ return -EIO;
+ return strlen(buf);
+}
+
+/*
+ * lpfc_force_rscn: Force an RSCN to be sent to all remote NPorts
+ * connected to the HBA.
+ *
+ * Value range is any ascii value
+ */
+static int lpfc_force_rscn;
+module_param(lpfc_force_rscn, int, 0644);
+MODULE_PARM_DESC(lpfc_force_rscn,
+ "Force an RSCN to be sent to all remote NPorts");
+lpfc_param_show(force_rscn)
+
+/**
+ * lpfc_force_rscn_init - Initialize the lpfc_force_rscn parameter (no-op)
+ * @phba: lpfc_hba pointer.
+ * @val: unused value.
+ *
+ * Returns:
+ * zero if val saved.
+ **/
+static int
+lpfc_force_rscn_init(struct lpfc_hba *phba, int val)
+{
+ return 0;
+}
+
+static DEVICE_ATTR(lpfc_force_rscn, 0644,
+ lpfc_force_rscn_show, lpfc_force_rscn_store);
+
+/**
* lpfc_fcp_imax_store
*
* @dev: class device that is converted into a Scsi_host.
@@ -5218,6 +5278,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
&dev_attr_lpfc_nvme_oas,
&dev_attr_lpfc_auto_imax,
&dev_attr_lpfc_fcp_imax,
+ &dev_attr_lpfc_force_rscn,
&dev_attr_lpfc_fcp_cpu_map,
&dev_attr_lpfc_fcp_io_channel,
&dev_attr_lpfc_suppress_rsp,
@@ -6238,6 +6299,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
lpfc_nvme_oas_init(phba, lpfc_nvme_oas);
lpfc_auto_imax_init(phba, lpfc_auto_imax);
lpfc_fcp_imax_init(phba, lpfc_fcp_imax);
+ lpfc_force_rscn_init(phba, lpfc_force_rscn);
lpfc_fcp_cpu_map_init(phba, lpfc_fcp_cpu_map);
lpfc_enable_hba_reset_init(phba, lpfc_enable_hba_reset);
lpfc_enable_hba_heartbeat_init(phba, lpfc_enable_hba_heartbeat);
--
2.13.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
` (4 preceding siblings ...)
2017-10-28 17:21 ` [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN James Smart
@ 2017-10-29 16:11 ` Sagi Grimberg
2017-10-30 4:43 ` James Smart
2017-11-01 15:36 ` Christoph Hellwig
6 siblings, 1 reply; 18+ messages in thread
From: Sagi Grimberg @ 2017-10-29 16:11 UTC (permalink / raw)
James,
> A transport may have a transport-specific mechanism that can signal
> when discovery controller content has changed and request a host
> to reconnect to the discovery controller.
>
> FC is such a transport. RSCNs may be generated by the FC port with
> the discovery server, with the RSCNs then broadcast to the FC-NVME
> hosts. A host, upon receiving the RSCN, would validate connectivity
> then initiate a discovery controller rescan, allowing new subsystems
> to be connected to or updating subsystem connectivity tables.
How does this fit with possible future discovery enhancements discussed
in nvme TWG right now? The notification for discovery records change
will not be transport specific in the future.
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-10-29 16:11 ` [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan Sagi Grimberg
@ 2017-10-30 4:43 ` James Smart
2017-11-01 15:38 ` Christoph Hellwig
0 siblings, 1 reply; 18+ messages in thread
From: James Smart @ 2017-10-30 4:43 UTC (permalink / raw)
On 10/29/2017 9:11 AM, Sagi Grimberg wrote:
> James,
>
>> A transport may have a transport-specific mechanism that can signal
>> when discovery controller content has changed and request a host
>> to reconnect to the discovery controller.
>>
>> FC is such a transport. RSCNs may be generated by the FC port with
>> the discovery server, with the RSCNs then broadcast to the FC-NVME
>> hosts. A host, upon receiving the RSCN, would validate connectivity
>> then initiate a discovery controller rescan, allowing new subsystems
>> to be connected to or updating subsystem connectivity tables.
>
> How does this fit with possible future discovery enhancements discussed
> in nvme TWG right now? The notification for discovery records change
> will not be transport specific in the future.
It's independent yet can coexist very well.
Currently, the FC-NVME standard strongly suggests a discovery controller
on each target. Having this simple notification avoids lots of
long-lived connections and rescans are done only when/where needed -
with no change in any standard. It works very well on current products.
There's nothing preventing it coexisting with long-lived discovery
controller connections and other configurations of discovery servers.
I don't buy into your last statement yet. Long-lived discovery
connections are fine, and an admin can already configure a set of systems
as they describe. I also expect any attempt to mandate centralized
discovery controllers to have a fair number of issues to be worked out,
not just with hosts but with targets and cross-fabric issues, and I
believe there will be pushback against giving complete control of
presentation to a third-party entity. In some respects, the history of
iSNS/SLP for iSCSI gave a brief example of trying to go down this path.
-- james
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
` (5 preceding siblings ...)
2017-10-29 16:11 ` [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan Sagi Grimberg
@ 2017-11-01 15:36 ` Christoph Hellwig
2017-11-01 15:55 ` James Smart
6 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2017-11-01 15:36 UTC (permalink / raw)
On Sat, Oct 28, 2017 at 10:21:09AM -0700, James Smart wrote:
> A transport may have a transport-specific mechanism that can signal
> when discovery controller content has changed and request a host
> to reconnect to the discovery controller.
Can you point to the part of the NVMe spec that allows this?
While the FC transport obviously could do it there is nothing in the
NVMe architecture model documenting this behavior.
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-10-30 4:43 ` James Smart
@ 2017-11-01 15:38 ` Christoph Hellwig
2017-11-01 16:03 ` James Smart
0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2017-11-01 15:38 UTC (permalink / raw)
On Sun, Oct 29, 2017 at 09:43:14PM -0700, James Smart wrote:
> Currently, the FC-NVME standard strongly suggests a discovery controller on
> each target.
Yikes. This is a new requirement not backed by anything in NVMe or
NVMeoF itself. Please have a discussion in the technical working group
on this behavior first.
It's not that I'm against it - but I really want this sort of optional
transport behavior clearly documented in the actual NVMe spec first.
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 15:36 ` Christoph Hellwig
@ 2017-11-01 15:55 ` James Smart
2017-11-01 15:57 ` Christoph Hellwig
0 siblings, 1 reply; 18+ messages in thread
From: James Smart @ 2017-11-01 15:55 UTC (permalink / raw)
On 11/1/2017 8:36 AM, Christoph Hellwig wrote:
> On Sat, Oct 28, 2017 at 10:21:09AM -0700, James Smart wrote:
>> A transport may have a transport-specific mechanism that can signal
>> when discovery controller content has changed and request a host
>> to reconnect to the discovery controller.
>
> Can you point to the part of the NVMe spec that allows this?
> While the FC transport obviously could do it there is nothing in the
> NVMe architecture model documenting this behavior.
There will not be anything in the NVMe base spec about such a thing as
it's PCI, nor will there be anything in the NVMe Fabrics spec, as nothing
about this changes or alters nvmf discovery. It can be, and is, fully
documented in the transport spec, which has been visible in drafts for
almost a year. It's not disallowed simply because an arch section
doesn't mention it. It also falls under the "method that a host uses to
obtain the information necessary to connect to the initial Discovery
Service is implementation specific" - but in this case, the transport,
in a published way, is making that implementation easier. I don't know
what you're trying to say.
-- james
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 15:55 ` James Smart
@ 2017-11-01 15:57 ` Christoph Hellwig
2017-11-01 16:12 ` James Smart
0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2017-11-01 15:57 UTC (permalink / raw)
On Wed, Nov 01, 2017 at 08:55:36AM -0700, James Smart wrote:
> There will not be anything in the NVMe base spec about such a thing as it's
> pci, nor will there be anything in the NVMe Fabrics spec as nothing about
> this changes or alters nvmf discovery. It can and is fully documented in the
> transport spec which has been visible in drafts for almost a year. It's not
> disallowed simply because an arch section doesn't mention it. It also falls
> under the "method that a host uses to obtain the information necessary to
> connect to the initial Discovery Service is implementation specific" - but
> in this case, the transport, in a published way, is making that
> implementation easier. I don't know what you're trying to say.
You'll need to discuss this with the technical working group or we won't
support it in Linux. I'm getting sick and tired of the FC smokey
backroom secret sauce.
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 15:38 ` Christoph Hellwig
@ 2017-11-01 16:03 ` James Smart
2017-11-01 16:28 ` Christoph Hellwig
0 siblings, 1 reply; 18+ messages in thread
From: James Smart @ 2017-11-01 16:03 UTC (permalink / raw)
On 11/1/2017 8:38 AM, Christoph Hellwig wrote:
> On Sun, Oct 29, 2017 at 09:43:14PM -0700, James Smart wrote:
>> Currently, the FC-NVME standard strongly suggests a discovery controller on
>> each target.
>
> Yikes. This is a new requirement not backed by anything in NVMe or
> NVMeoF itself. Please have discussion in the technical working group
> on this behavior first.
>
> It's not that I'm against it - but I really want this sort of optional
> transport behavior clearly documented in the actual NVMe spec first.
>
See my last email.
The "strongly suggest" is layman's language for the word "should" that
was used in the standard. Should was used so that discovery of the
available discovery controllers on FC could be done in an automatic and
dynamic way without a priori knowledge. Any FC target device is free to
choose if and where it implements discovery controllers, just as on any
other transport in nvmf.
We have had discussions in the technical working group and with nvme
leadership on this, and as I said, it's well within what a transport can
do. It's doing nothing illegal or odd. It doesn't require explicit
documentation in an NVMe spec. It is in an NVMe fabrics-based transport
specification.
-- james
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 15:57 ` Christoph Hellwig
@ 2017-11-01 16:12 ` James Smart
2017-11-01 16:29 ` Christoph Hellwig
0 siblings, 1 reply; 18+ messages in thread
From: James Smart @ 2017-11-01 16:12 UTC (permalink / raw)
On 11/1/2017 8:57 AM, Christoph Hellwig wrote:
> On Wed, Nov 01, 2017 at 08:55:36AM -0700, James Smart wrote:
>> There will not be anything in the NVMe base spec about such a thing as it's
>> pci, nor will there be anything in the NVMe Fabrics spec as nothing about
>> this changes or alters nvmf discovery. It can and is fully documented in the
>> transport spec which has been visible in drafts for almost a year. It's not
>> disallowed simply because an arch section doesn't mention it. It also falls
>> under the "method that a host uses to obtain the information necessary to
>> connect to the initial Discovery Service is implementation specific" - but
>> in this case, the transport, in a published way, is making that
>> implementation easier. I don't know what you're trying to say.
>
> You'll need to discuss this with the technical working group or we won't
> support it in Linux. I'm getting sick and tired of the FC smokey
> backroom secret sauce.
>
Seriously? It is in a published transport spec that was reviewed by
working group leadership and members.
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 16:03 ` James Smart
@ 2017-11-01 16:28 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2017-11-01 16:28 UTC (permalink / raw)
On Wed, Nov 01, 2017 at 09:03:37AM -0700, James Smart wrote:
> The "strongly suggest" is laymans language for the word "should" that was
> used in the standard. Should was used so that discovery of the available
> discovery controllers on FC could be done in an automatic and dynamic way
> without apriori knowledge. Any FC target device is free to choose if and
> where they implement discovery controllers, just as on any other transport
> in nvmf.
>
> We have had discussions in the technical working group and with nvme
> leadership on this and as I said, its well within what a transport can do.
> It's doing nothing illegal or odd. It doesn't require explicit documentation
> in a NVMe spec. It is in a NVME fabrics-based transport specification.
Please send a TP against Section 5 of the NVMeoF spec to add your
transport specific re-discovery notifications.
* [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan
2017-11-01 16:12 ` James Smart
@ 2017-11-01 16:29 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2017-11-01 16:29 UTC (permalink / raw)
On Wed, Nov 01, 2017 at 09:12:00AM -0700, James Smart wrote:
> Seriously ? It is in a published transport spec that was reviewed by
> working group leadership and members.
NVMe-FC has _never_ been reviewed or even posted to the NVMe technical
working group list. Which is a big part of all the problems it has been
causing. The whole idea of doing a NVMe transport outside of NVMe has
been nothing but a nightmare.
* [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN
2017-10-28 17:21 ` [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN James Smart
@ 2017-11-02 20:09 ` Ewan D. Milne
2017-11-02 22:02 ` James Smart
0 siblings, 1 reply; 18+ messages in thread
From: Ewan D. Milne @ 2017-11-02 20:09 UTC (permalink / raw)
On Sat, 2017-10-28 at 10:21 -0700, James Smart wrote:
> From: Dick Kennedy <dick.kennedy at broadcom.com>
>
> To support scenarios which aren't bound to nvmetcli add-port scenarios,
> which is currently where the transport invokes the nvme_subsystem_changed
> callback, add a sysfs attribute to lpfc which can be written to cause
> an RSCN to be generated for the nport.
>
> Signed-off-by: Dick Kennedy <dick.kennedy at broadcom.com>
> Signed-off-by: James Smart <james.smart at broadcom.com>
> ---
> drivers/scsi/lpfc/lpfc.h | 1 +
> drivers/scsi/lpfc/lpfc_attr.c | 62 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 63 insertions(+)
>
> diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
> index 157788f0bc10..52d403548a3a 100644
> --- a/drivers/scsi/lpfc/lpfc.h
> +++ b/drivers/scsi/lpfc/lpfc.h
> @@ -781,6 +781,7 @@ struct lpfc_hba {
> uint32_t cfg_use_msi;
> uint32_t cfg_auto_imax;
> uint32_t cfg_fcp_imax;
> + uint32_t cfg_force_rscn;
> uint32_t cfg_fcp_cpu_map;
> uint32_t cfg_fcp_io_channel;
> uint32_t cfg_suppress_rsp;
> diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
> index c17677f494af..6e8e54e65e25 100644
> --- a/drivers/scsi/lpfc/lpfc_attr.c
> +++ b/drivers/scsi/lpfc/lpfc_attr.c
> @@ -4469,6 +4469,66 @@ static DEVICE_ATTR(lpfc_req_fw_upgrade, S_IRUGO | S_IWUSR,
> lpfc_request_firmware_upgrade_store);
>
> /**
> + * lpfc_force_rscn_store
> + *
> + * @dev: class device that is converted into a Scsi_host.
> + * @attr: device attribute, not used.
> + * @buf: unused string
> + * @count: unused variable.
> + *
> + * Description:
> + * Force the switch to send an RSCN to all other NPorts in our zone.
> + * If we are direct connect pt2pt, build the RSCN command ourselves
> + * and send it to the other NPort. Not supported for private loop.
> + *
> + * Returns:
> + * 0 - on success
> + * -EIO - if command is not sent
> + **/
> +static ssize_t
> +lpfc_force_rscn_store(struct device *dev, struct device_attribute *attr,
> + const char *buf, size_t count)
> +{
> + struct Scsi_Host *shost = class_to_shost(dev);
> + struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
> + int i;
> +
> + i = lpfc_issue_els_rscn(vport, 0);
> + if (i)
> + return -EIO;
> + return strlen(buf);
> +}
> +
> +/*
> + * lpfc_force_rscn: Force an RSCN to be sent to all remote NPorts
> + * connected to the HBA.
> + *
> + * Value range is any ascii value
> + */
> +static int lpfc_force_rscn;
> +module_param(lpfc_force_rscn, int, 0644);
> +MODULE_PARM_DESC(lpfc_force_rscn,
> + "Force an RSCN to be sent to all remote NPorts");
> +lpfc_param_show(force_rscn)
> +
> +/**
> + * lpfc_force_rscn_init - Force an RSCN to be sent to all remote NPorts
> + * @phba: lpfc_hba pointer.
> + * @val: unused value.
> + *
> + * Returns:
> + * zero if val saved.
> + **/
> +static int
> +lpfc_force_rscn_init(struct lpfc_hba *phba, int val)
> +{
> + return 0;
> +}
> +
> +static DEVICE_ATTR(lpfc_force_rscn, 0644,
> + lpfc_force_rscn_show, lpfc_force_rscn_store);
> +
> +/**
> * lpfc_fcp_imax_store
> *
> * @dev: class device that is converted into a Scsi_host.
> @@ -5218,6 +5278,7 @@ struct device_attribute *lpfc_hba_attrs[] = {
> &dev_attr_lpfc_nvme_oas,
> &dev_attr_lpfc_auto_imax,
> &dev_attr_lpfc_fcp_imax,
> + &dev_attr_lpfc_force_rscn,
> &dev_attr_lpfc_fcp_cpu_map,
> &dev_attr_lpfc_fcp_io_channel,
> &dev_attr_lpfc_suppress_rsp,
> @@ -6238,6 +6299,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
> lpfc_nvme_oas_init(phba, lpfc_nvme_oas);
> lpfc_auto_imax_init(phba, lpfc_auto_imax);
> lpfc_fcp_imax_init(phba, lpfc_fcp_imax);
> + lpfc_force_rscn_init(phba, lpfc_force_rscn);
> lpfc_fcp_cpu_map_init(phba, lpfc_fcp_cpu_map);
> lpfc_enable_hba_reset_init(phba, lpfc_enable_hba_reset);
> lpfc_enable_hba_heartbeat_init(phba, lpfc_enable_hba_heartbeat);
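For readers wanting to exercise the new attribute from userspace, a minimal sketch follows. This is an assumption based on the patch above, not part of the submission: the attribute is registered in lpfc_hba_attrs, so it would presumably appear under the HBA's SCSI host sysfs directory, and the host number varies per system.

```shell
#!/bin/sh
# Hypothetical usage sketch: request an RSCN from each lpfc-backed port.
# Path and attribute name assumed from the patch's DEVICE_ATTR(lpfc_force_rscn, ...).
for host in /sys/class/scsi_host/host*; do
    attr="$host/lpfc_force_rscn"
    # Only lpfc hosts expose the attribute; skip everything else.
    if [ -w "$attr" ]; then
        # Per lpfc_force_rscn_store, any written value triggers the RSCN;
        # the write fails with EIO if the ELS command could not be sent.
        echo 1 > "$attr" || echo "RSCN send failed on $host" >&2
    fi
done
```

Note that lpfc_force_rscn_store ignores the written value entirely (the buffer is documented as unused), so `echo 1` is purely conventional here.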
So, with this sysfs interface, if someone accidentally or intentionally
issues RSCNs in a loop at high frequency, what is this going to do to
the FC fabric?
Most people know to use single-initiator zoning, but the affected target
ports would still get the RSCN, correct?
-Ewan
* [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN
2017-11-02 20:09 ` Ewan D. Milne
@ 2017-11-02 22:02 ` James Smart
0 siblings, 0 replies; 18+ messages in thread
From: James Smart @ 2017-11-02 22:02 UTC (permalink / raw)
On 11/2/2017 1:09 PM, Ewan D. Milne wrote:
>
> So, with this sysfs interface, if someone accidentally or intentionally
> issues RSCNs in a loop at high frequency, what is this going to do to
> the FC fabric?
>
> Most people know to use single-initiator zoning, but the affected target
> ports would still get the RSCN, correct?
>
> -Ewan
>
>
It's a moot point right now, but...
It generates some traffic of course. They are small single-frame
transmissions, so not a lot. If switched, the switch will only send them
to ports that have registered for the event type, an attribute change,
and then again only to the initiators within the same zone(s) as the
target. Target ports would not receive them. So it should be limited to
the initiators that can view the target.
-- james
Thread overview: 18+ messages
2017-10-28 17:21 [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan James Smart
2017-10-28 17:21 ` [PATCH 1/5] nvmet: call transport on subsystem add and delete James Smart
2017-10-28 17:21 ` [PATCH 2/5] nvmet_fc: support transport subsystem events James Smart
2017-10-28 17:21 ` [PATCH 3/5] lpfc: Add support to generate RSCN events for nport James Smart
2017-10-28 17:21 ` [PATCH 4/5] lpfc: Add NVME rescan support via RSCNs James Smart
2017-10-28 17:21 ` [PATCH 5/5] lpfc: Add sysfs interface to post NVME RSCN James Smart
2017-11-02 20:09 ` Ewan D. Milne
2017-11-02 22:02 ` James Smart
2017-10-29 16:11 ` [PATCH 0/5] nvmet/nvmet_fc: add events for discovery controller rescan Sagi Grimberg
2017-10-30 4:43 ` James Smart
2017-11-01 15:38 ` Christoph Hellwig
2017-11-01 16:03 ` James Smart
2017-11-01 16:28 ` Christoph Hellwig
2017-11-01 15:36 ` Christoph Hellwig
2017-11-01 15:55 ` James Smart
2017-11-01 15:57 ` Christoph Hellwig
2017-11-01 16:12 ` James Smart
2017-11-01 16:29 ` Christoph Hellwig