linux-scsi.vger.kernel.org archive mirror
* [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver
@ 2017-06-21 20:48 Madhani, Himanshu
  2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Himanshu Madhani <himanshu.madhani@cavium.com>

Hi Martin,

This patch series adds NVMe FC fabric support for qla2xxx initiator mode
driver.

This series depends on the target multiqueue series that was sent out on
June 13, 2017 (https://www.spinics.net/lists/linux-scsi/msg109827.html).

There are two new files, qla_nvme.c and qla_nvme.h, created to hold the
changes needed for registration with the NVMe FC transport template as well
as the error handling logic.

Patch 1 adds NVMe bits to various driver resources to help with NVMe remote
port discovery and PRLI handling in the driver.

Patch 2 adds NVMe command handling in the driver.

Patch 3 contains the bulk of the NVMe changes: it enables NVMe support based
on a module parameter, which is used for firmware initialization and NVMe
transport registration. All of the logic to handle NVMe commands and errors
is contained in the qla_nvme.c file.

Patches 4 and 5 are trivial changes to FDMI registration to report the NVMe
FC-4 type to the switch management server.

Please apply this series to for-next for inclusion in the 4.13 merge window.

Note: Patch 2 does not compile on its own because it depends on changes that
      are part of patch 3. Please apply patches 1-6 together to get a
      compilable driver.

Changes from v1 --> v2

o Addressed review comments by Johannes and James Smart.
o Added Reviewed-by tags wherever applicable.
o Added Commit log for patches where applicable.
o Removed qla_nvme_hba_scan() as it turns out to be dead code until an auto
  discovery mechanism is implemented in FC-NVMe.
o Removed the unneeded while loop in qla_nvme_delete().

Thanks,
Himanshu 

Duane Grigsby (5):
  qla2xxx: Add FC-NVMe port discovery and PRLI handling
  qla2xxx: Add FC-NVMe command handling
  qla2xxx: Add FC-NVMe F/W initialization and transport registration
  qla2xxx: Send FC4 type NVMe to the management server
  qla2xxx: Use FC-NVMe FC4 type for FDMI registration

Himanshu Madhani (1):
  qla2xxx: Update Driver version to 10.00.00.00-k

 drivers/scsi/qla2xxx/Makefile      |   2 +-
 drivers/scsi/qla2xxx/qla_dbg.c     |   9 +-
 drivers/scsi/qla2xxx/qla_def.h     |  54 ++-
 drivers/scsi/qla2xxx/qla_fw.h      |  35 +-
 drivers/scsi/qla2xxx/qla_gbl.h     |  18 +-
 drivers/scsi/qla2xxx/qla_gs.c      | 134 ++++++-
 drivers/scsi/qla2xxx/qla_init.c    | 187 ++++++++-
 drivers/scsi/qla2xxx/qla_iocb.c    |  57 +++
 drivers/scsi/qla2xxx/qla_isr.c     |  98 +++++
 drivers/scsi/qla2xxx/qla_mbx.c     |  54 ++-
 drivers/scsi/qla2xxx/qla_nvme.c    | 756 +++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_nvme.h    | 132 +++++++
 drivers/scsi/qla2xxx/qla_os.c      |  60 ++-
 drivers/scsi/qla2xxx/qla_target.c  |   4 +-
 drivers/scsi/qla2xxx/qla_version.h |   6 +-
 15 files changed, 1559 insertions(+), 47 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_nvme.c
 create mode 100644 drivers/scsi/qla2xxx/qla_nvme.h

-- 
2.12.0


* [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:28   ` Hannes Reinecke
  2017-06-28 21:15   ` James Bottomley
  2017-06-21 20:48 ` [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling Madhani, Himanshu
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Duane Grigsby <duane.grigsby@cavium.com>

Added logic to make the PRLI step of the login process a
separate, optional operation for FC-NVMe ports, such that we can
change the FC-4 type to 0x28 (NVMe).

Currently, the driver performs the PLOGI and PRLI together as one
operation; if the discovered port is an NVMe port, we instead
issue the PLOGI first and then issue the PRLI separately. Also,
the fabric discovery logic was changed to mark each discovered
FC-NVMe port, so that we can register them with the FC-NVMe
transport later.

Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/scsi/qla2xxx/qla_dbg.c    |   9 +-
 drivers/scsi/qla2xxx/qla_def.h    |  30 ++++++-
 drivers/scsi/qla2xxx/qla_fw.h     |  13 ++-
 drivers/scsi/qla2xxx/qla_gbl.h    |   1 +
 drivers/scsi/qla2xxx/qla_init.c   | 168 ++++++++++++++++++++++++++++++++++++--
 drivers/scsi/qla2xxx/qla_iocb.c   |  21 +++++
 drivers/scsi/qla2xxx/qla_mbx.c    |  33 +++++---
 drivers/scsi/qla2xxx/qla_os.c     |   4 +
 drivers/scsi/qla2xxx/qla_target.c |   4 +-
 9 files changed, 256 insertions(+), 27 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
index c0c90dcc7c7b..cf4f47603a91 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.c
+++ b/drivers/scsi/qla2xxx/qla_dbg.c
@@ -15,9 +15,10 @@
  * |                              |                    | 0x015b-0x0160	|
  * |                              |                    | 0x016e		|
  * | Mailbox commands             |       0x1199       | 0x1193		|
- * | Device Discovery             |       0x2004       | 0x2016		|
- * |                              |                    | 0x2011-0x2012, |
- * |                              |                    | 0x2099-0x20a4  |
+ * | Device Discovery             |       0x2131       | 0x210e-0x2116  |
+ * |				  | 		       | 0x211a         |
+ * |                              |                    | 0x211c-0x2128  |
+ * |                              |                    | 0x212a-0x2130  |
  * | Queue Command and IO tracing |       0x3074       | 0x300b         |
  * |                              |                    | 0x3027-0x3028  |
  * |                              |                    | 0x303d-0x3041  |
@@ -59,7 +60,7 @@
  * |                              |                    | 0xb13c-0xb140  |
  * |                              |                    | 0xb149		|
  * | MultiQ                       |       0xc010       |		|
- * | Misc                         |       0xd301       | 0xd031-0xd0ff	|
+ * | Misc                         |       0xd302       | 0xd031-0xd0ff	|
  * |                              |                    | 0xd101-0xd1fe	|
  * |                              |                    | 0xd214-0xd2fe	|
  * | Target Mode		  |	  0xe081       |		|
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index e1af9db3691d..6f8df9cea8ff 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -343,6 +343,7 @@ struct srb_iocb {
 #define SRB_LOGIN_RETRIED	BIT_0
 #define SRB_LOGIN_COND_PLOGI	BIT_1
 #define SRB_LOGIN_SKIP_PRLI	BIT_2
+#define SRB_LOGIN_NVME_PRLI	BIT_3
 			uint16_t data[2];
 			u32 iop[2];
 		} logio;
@@ -436,6 +437,7 @@ struct srb_iocb {
 #define SRB_NACK_PLOGI	16
 #define SRB_NACK_PRLI	17
 #define SRB_NACK_LOGO	18
+#define SRB_PRLI_CMD	21
 
 enum {
 	TYPE_SRB,
@@ -1088,6 +1090,7 @@ struct mbx_cmd_32 {
 #define	MBX_1		BIT_1
 #define	MBX_0		BIT_0
 
+#define RNID_TYPE_PORT_LOGIN	0x7
 #define RNID_TYPE_SET_VERSION	0x9
 #define RNID_TYPE_ASIC_TEMP	0xC
 
@@ -2152,6 +2155,7 @@ typedef struct {
 	uint8_t fabric_port_name[WWN_SIZE];
 	uint16_t fp_speed;
 	uint8_t fc4_type;
+	uint8_t fc4f_nvme;	/* nvme fc4 feature bits */
 } sw_info_t;
 
 /* FCP-4 types */
@@ -2180,7 +2184,8 @@ typedef enum {
 	FCT_SWITCH,
 	FCT_BROADCAST,
 	FCT_INITIATOR,
-	FCT_TARGET
+	FCT_TARGET,
+	FCT_NVME
 } fc_port_type_t;
 
 enum qla_sess_deletion {
@@ -2237,10 +2242,12 @@ enum fcport_mgt_event {
 	FCME_RSCN,
 	FCME_GIDPN_DONE,
 	FCME_PLOGI_DONE,	/* Initiator side sent LLIOCB */
+	FCME_PRLI_DONE,
 	FCME_GNL_DONE,
 	FCME_GPSC_DONE,
 	FCME_GPDB_DONE,
 	FCME_GPNID_DONE,
+	FCME_GFFID_DONE,
 	FCME_DELETE_DONE,
 };
 
@@ -2274,6 +2281,16 @@ typedef struct fc_port {
 	unsigned int login_pause:1;
 	unsigned int login_succ:1;
 
+	struct work_struct nvme_del_work;
+	atomic_t nvme_ref_count;
+	uint32_t nvme_prli_service_param;
+#define NVME_PRLI_SP_CONF       BIT_7
+#define NVME_PRLI_SP_INITIATOR  BIT_5
+#define NVME_PRLI_SP_TARGET     BIT_4
+#define NVME_PRLI_SP_DISCOVERY  BIT_3
+	uint8_t nvme_flag;
+#define NVME_FLAG_REGISTERED 4
+
 	struct fc_port *conflict;
 	unsigned char logout_completed;
 	int generation;
@@ -2306,6 +2323,7 @@ typedef struct fc_port {
 	u32 supported_classes;
 
 	uint8_t fc4_type;
+	uint8_t	fc4f_nvme;
 	uint8_t scan_state;
 
 	unsigned long last_queue_full;
@@ -2313,6 +2331,8 @@ typedef struct fc_port {
 
 	uint16_t port_id;
 
+	struct nvme_fc_remote_port *nvme_remote_port;
+
 	unsigned long retry_delay_timestamp;
 	struct qla_tgt_sess *tgt_session;
 	struct ct_sns_desc ct_desc;
@@ -2745,7 +2765,7 @@ struct ct_sns_req {
 
 		struct {
 			uint8_t reserved;
-			uint8_t port_name[3];
+			uint8_t port_id[3];
 		} gff_id;
 
 		struct {
@@ -3052,6 +3072,7 @@ enum qla_work_type {
 	QLA_EVT_GPNID_DONE,
 	QLA_EVT_NEW_SESS,
 	QLA_EVT_GPDB,
+	QLA_EVT_PRLI,
 	QLA_EVT_GPSC,
 	QLA_EVT_UPD_FCPORT,
 	QLA_EVT_GNL,
@@ -4007,6 +4028,7 @@ typedef struct scsi_qla_host {
 		uint32_t	qpairs_available:1;
 		uint32_t	qpairs_req_created:1;
 		uint32_t	qpairs_rsp_created:1;
+		uint32_t	nvme_enabled:1;
 	} flags;
 
 	atomic_t	loop_state;
@@ -4085,6 +4107,10 @@ typedef struct scsi_qla_host {
 	uint8_t		port_name[WWN_SIZE];
 	uint8_t		fabric_node_name[WWN_SIZE];
 
+	struct		nvme_fc_local_port *nvme_local_port;
+	atomic_t	nvme_ref_count;
+	struct list_head nvme_rport_list;
+
 	uint16_t	fcoe_vlan_id;
 	uint16_t	fcoe_fcf_idx;
 	uint8_t		fcoe_vn_port_mac[6];
diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
index 1f808928763b..dcae62d4cbeb 100644
--- a/drivers/scsi/qla2xxx/qla_fw.h
+++ b/drivers/scsi/qla2xxx/qla_fw.h
@@ -37,6 +37,12 @@ struct port_database_24xx {
 #define PDF_CLASS_2		BIT_4
 #define PDF_HARD_ADDR		BIT_1
 
+	/*
+	 * for NVMe, the login_state field has been
+	 * split into nibbles.
+	 * The lower nibble is for FCP.
+	 * The upper nibble is for NVMe.
+	 */
 	uint8_t current_login_state;
 	uint8_t last_login_state;
 #define PDS_PLOGI_PENDING	0x03
@@ -69,7 +75,11 @@ struct port_database_24xx {
 	uint8_t port_name[WWN_SIZE];
 	uint8_t node_name[WWN_SIZE];
 
-	uint8_t reserved_3[24];
+	uint8_t reserved_3[4];
+	uint16_t prli_nvme_svc_param_word_0;	/* Bits 15-0 of word 0 */
+	uint16_t prli_nvme_svc_param_word_3;	/* Bits 15-0 of word 3 */
+	uint16_t nvme_first_burst_size;
+	uint8_t reserved_4[14];
 };
 
 /*
@@ -819,6 +829,7 @@ struct logio_entry_24xx {
 #define LCF_CLASS_2		BIT_8	/* Enable class 2 during PLOGI. */
 #define LCF_FREE_NPORT		BIT_7	/* Release NPORT handle after LOGO. */
 #define LCF_EXPL_LOGO		BIT_6	/* Perform an explicit LOGO. */
+#define LCF_NVME_PRLI		BIT_6   /* Perform NVME FC4 PRLI */
 #define LCF_SKIP_PRLI		BIT_5	/* Skip PRLI after PLOGI. */
 #define LCF_IMPL_LOGO_ALL	BIT_5	/* Implicit LOGO to all ports. */
 #define LCF_COND_PLOGI		BIT_4	/* PLOGI only if not logged-in. */
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index beebf96c23d4..6fbee11c1a18 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -99,6 +99,7 @@ extern struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *,
 extern int qla2xxx_delete_qpair(struct scsi_qla_host *, struct qla_qpair *);
 void qla2x00_fcport_event_handler(scsi_qla_host_t *, struct event_arg *);
 int qla24xx_async_gpdb(struct scsi_qla_host *, fc_port_t *, u8);
+int qla24xx_async_prli(struct scsi_qla_host *, fc_port_t *);
 int qla24xx_async_notify_ack(scsi_qla_host_t *, fc_port_t *,
 	struct imm_ntfy_from_isp *, int);
 int qla24xx_post_newsess_work(struct scsi_qla_host *, port_id_t *, u8 *,
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 48c0a58330d4..8a2586a04961 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -37,8 +37,11 @@ static struct qla_chip_state_84xx *qla84xx_get_chip(struct scsi_qla_host *);
 static int qla84xx_init_chip(scsi_qla_host_t *);
 static int qla25xx_init_queues(struct qla_hw_data *);
 static int qla24xx_post_gpdb_work(struct scsi_qla_host *, fc_port_t *, u8);
+static int qla24xx_post_prli_work(struct scsi_qla_host*, fc_port_t *);
 static void qla24xx_handle_plogi_done_event(struct scsi_qla_host *,
     struct event_arg *);
+static void qla24xx_handle_prli_done_event(struct scsi_qla_host *,
+    struct event_arg *);
 
 /* SRB Extensions ---------------------------------------------------------- */
 
@@ -191,6 +194,10 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
 	lio->timeout = qla2x00_async_iocb_timeout;
 	sp->done = qla2x00_async_login_sp_done;
 	lio->u.logio.flags |= SRB_LOGIN_COND_PLOGI;
+
+	if (fcport->fc4f_nvme)
+		lio->u.logio.flags |= SRB_LOGIN_SKIP_PRLI;
+
 	if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
 		lio->u.logio.flags |= SRB_LOGIN_RETRIED;
 	rval = qla2x00_start_sp(sp);
@@ -327,7 +334,7 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
 	u16 i, n, found = 0, loop_id;
 	port_id_t id;
 	u64 wwn;
-	u8 opt = 0;
+	u8 opt = 0, current_login_state;
 
 	fcport = ea->fcport;
 
@@ -414,7 +421,12 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
 			fcport->login_pause = 1;
 		}
 
-		switch (e->current_login_state) {
+		if  (fcport->fc4f_nvme)
+			current_login_state = e->current_login_state >> 4;
+		else
+			current_login_state = e->current_login_state & 0xf;
+
+		switch (current_login_state) {
 		case DSC_LS_PRLI_COMP:
 			ql_dbg(ql_dbg_disc, vha, 0x20e4,
 			    "%s %d %8phC post gpdb\n",
@@ -422,7 +434,6 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
 			opt = PDO_FORCE_ADISC;
 			qla24xx_post_gpdb_work(vha, fcport, opt);
 			break;
-
 		case DSC_LS_PORT_UNAVAIL:
 		default:
 			if (fcport->loop_id == FC_NO_LOOP_ID) {
@@ -665,6 +676,104 @@ void qla24xx_async_gpdb_sp_done(void *s, int res)
 	sp->free(sp);
 }
 
+static int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+{
+	struct qla_work_evt *e;
+
+	e = qla2x00_alloc_work(vha, QLA_EVT_PRLI);
+	if (!e)
+		return QLA_FUNCTION_FAILED;
+
+	e->u.fcport.fcport = fcport;
+
+	return qla2x00_post_work(vha, e);
+}
+
+static void
+qla2x00_async_prli_sp_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+	struct scsi_qla_host *vha = sp->vha;
+	struct srb_iocb *lio = &sp->u.iocb_cmd;
+	struct event_arg ea;
+
+	ql_dbg(ql_dbg_disc, vha, 0x2129,
+	    "%s %8phC res %d \n", __func__,
+	    sp->fcport->port_name, res);
+
+	sp->fcport->flags &= ~FCF_ASYNC_SENT;
+
+	if (!test_bit(UNLOADING, &vha->dpc_flags)) {
+		memset(&ea, 0, sizeof(ea));
+		ea.event = FCME_PRLI_DONE;
+		ea.fcport = sp->fcport;
+		ea.data[0] = lio->u.logio.data[0];
+		ea.data[1] = lio->u.logio.data[1];
+		ea.iop[0] = lio->u.logio.iop[0];
+		ea.iop[1] = lio->u.logio.iop[1];
+		ea.sp = sp;
+
+		qla2x00_fcport_event_handler(vha, &ea);
+	}
+
+	sp->free(sp);
+}
+
+int
+qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+{
+	srb_t *sp;
+	struct srb_iocb *lio;
+	int rval = QLA_FUNCTION_FAILED;
+
+	if (!vha->flags.online)
+		return rval;
+
+	if (fcport->fw_login_state == DSC_LS_PLOGI_PEND ||
+	    fcport->fw_login_state == DSC_LS_PLOGI_COMP ||
+	    fcport->fw_login_state == DSC_LS_PRLI_PEND)
+		return rval;
+
+	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+	if (!sp)
+		return rval;
+
+	fcport->flags |= FCF_ASYNC_SENT;
+	fcport->logout_completed = 0;
+
+	sp->type = SRB_PRLI_CMD;
+	sp->name = "prli";
+	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+
+	lio = &sp->u.iocb_cmd;
+	lio->timeout = qla2x00_async_iocb_timeout;
+	sp->done = qla2x00_async_prli_sp_done;
+	lio->u.logio.flags = 0;
+
+	if  (fcport->fc4f_nvme)
+		lio->u.logio.flags |= SRB_LOGIN_NVME_PRLI;
+
+	rval = qla2x00_start_sp(sp);
+	if (rval != QLA_SUCCESS) {
+		fcport->flags &= ~FCF_ASYNC_SENT;
+		fcport->flags |= FCF_LOGIN_NEEDED;
+		set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+		goto done_free_sp;
+	}
+
+	ql_dbg(ql_dbg_disc, vha, 0x211b,
+	    "Async-prli - %8phC hdl=%x, loopid=%x portid=%06x retries=%d.\n",
+	    fcport->port_name, sp->handle, fcport->loop_id,
+	    fcport->d_id.b24, fcport->login_retry);
+
+	return rval;
+
+done_free_sp:
+	sp->free(sp);
+	fcport->flags &= ~FCF_ASYNC_SENT;
+	return rval;
+}
+
 static int qla24xx_post_gpdb_work(struct scsi_qla_host *vha, fc_port_t *fcport,
     u8 opt)
 {
@@ -1126,6 +1235,9 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
 	case FCME_PLOGI_DONE:	/* Initiator side sent LLIOCB */
 		qla24xx_handle_plogi_done_event(vha, ea);
 		break;
+	case FCME_PRLI_DONE:
+		qla24xx_handle_prli_done_event(vha, ea);
+		break;
 	case FCME_GPDB_DONE:
 		qla24xx_handle_gpdb_event(vha, ea);
 		break;
@@ -1308,6 +1420,27 @@ qla24xx_async_abort_command(srb_t *sp)
 }
 
 static void
+qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+{
+	switch (ea->data[0]) {
+	case MBS_COMMAND_COMPLETE:
+		ql_dbg(ql_dbg_disc, vha, 0x2118,
+		    "%s %d %8phC post gpdb\n",
+		    __func__, __LINE__, ea->fcport->port_name);
+
+		ea->fcport->chip_reset = vha->hw->base_qpair->chip_reset;
+		ea->fcport->logout_on_delete = 1;
+		qla24xx_post_gpdb_work(vha, ea->fcport, 0);
+		break;
+	default:
+		ql_dbg(ql_dbg_disc, vha, 0x2119,
+		    "%s %d %8phC unhandle event of %x\n",
+		    __func__, __LINE__, ea->fcport->port_name, ea->data[0]);
+		break;
+	}
+}
+
+static void
 qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
 {
 	port_id_t cid;	/* conflict Nport id */
@@ -1319,12 +1452,19 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
 		 * force a relogin attempt via implicit LOGO, PLOGI, and PRLI
 		 * requests.
 		 */
-		ql_dbg(ql_dbg_disc, vha, 0x20ea,
-		    "%s %d %8phC post gpdb\n",
-		    __func__, __LINE__, ea->fcport->port_name);
-		ea->fcport->chip_reset = vha->hw->base_qpair->chip_reset;
-		ea->fcport->logout_on_delete = 1;
-		qla24xx_post_gpdb_work(vha, ea->fcport, 0);
+		if (ea->fcport->fc4f_nvme) {
+			ql_dbg(ql_dbg_disc, vha, 0x2117,
+				"%s %d %8phC post prli\n",
+				__func__, __LINE__, ea->fcport->port_name);
+			qla24xx_post_prli_work(vha, ea->fcport);
+		} else {
+			ql_dbg(ql_dbg_disc, vha, 0x20ea,
+				"%s %d %8phC post gpdb\n",
+				__func__, __LINE__, ea->fcport->port_name);
+			ea->fcport->chip_reset = vha->hw->base_qpair->chip_reset;
+			ea->fcport->logout_on_delete = 1;
+			qla24xx_post_gpdb_work(vha, ea->fcport, 0);
+		}
 		break;
 	case MBS_COMMAND_ERROR:
 		ql_dbg(ql_dbg_disc, vha, 0x20eb, "%s %d %8phC cmd error %x\n",
@@ -4639,6 +4779,16 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *vha)
 				new_fcport->fp_speed = swl[swl_idx].fp_speed;
 				new_fcport->fc4_type = swl[swl_idx].fc4_type;
 
+				new_fcport->nvme_flag = 0;
+				if (vha->flags.nvme_enabled &&
+				    swl[swl_idx].fc4f_nvme) {
+					new_fcport->fc4f_nvme =
+					    swl[swl_idx].fc4f_nvme;
+					ql_log(ql_log_info, vha, 0x2131,
+					    "FOUND: NVME port %8phC as FC Type 28h\n",
+					    new_fcport->port_name);
+				}
+
 				if (swl[swl_idx].d_id.b.rsvd_1 != 0) {
 					last_dev = 1;
 				}
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index ac49febbac76..daa53235a28a 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -2211,12 +2211,30 @@ qla2x00_alloc_iocbs(struct scsi_qla_host *vha, srb_t *sp)
 }
 
 static void
+qla24xx_prli_iocb(srb_t *sp, struct logio_entry_24xx *logio)
+{
+	struct srb_iocb *lio = &sp->u.iocb_cmd;
+
+	logio->entry_type = LOGINOUT_PORT_IOCB_TYPE;
+	logio->control_flags = cpu_to_le16(LCF_COMMAND_PRLI);
+	if (lio->u.logio.flags & SRB_LOGIN_NVME_PRLI)
+		logio->control_flags |= LCF_NVME_PRLI;
+
+	logio->nport_handle = cpu_to_le16(sp->fcport->loop_id);
+	logio->port_id[0] = sp->fcport->d_id.b.al_pa;
+	logio->port_id[1] = sp->fcport->d_id.b.area;
+	logio->port_id[2] = sp->fcport->d_id.b.domain;
+	logio->vp_index = sp->vha->vp_idx;
+}
+
+static void
 qla24xx_login_iocb(srb_t *sp, struct logio_entry_24xx *logio)
 {
 	struct srb_iocb *lio = &sp->u.iocb_cmd;
 
 	logio->entry_type = LOGINOUT_PORT_IOCB_TYPE;
 	logio->control_flags = cpu_to_le16(LCF_COMMAND_PLOGI);
+
 	if (lio->u.logio.flags & SRB_LOGIN_COND_PLOGI)
 		logio->control_flags |= cpu_to_le16(LCF_COND_PLOGI);
 	if (lio->u.logio.flags & SRB_LOGIN_SKIP_PRLI)
@@ -3162,6 +3180,9 @@ qla2x00_start_sp(srb_t *sp)
 		    qla24xx_login_iocb(sp, pkt) :
 		    qla2x00_login_iocb(sp, pkt);
 		break;
+	case SRB_PRLI_CMD:
+		qla24xx_prli_iocb(sp, pkt);
+		break;
 	case SRB_LOGOUT_CMD:
 		IS_FWI2_CAPABLE(ha) ?
 		    qla24xx_logout_iocb(sp, pkt) :
diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
index f02a2baffb5b..1eac67e8fdfd 100644
--- a/drivers/scsi/qla2xxx/qla_mbx.c
+++ b/drivers/scsi/qla2xxx/qla_mbx.c
@@ -5968,14 +5968,22 @@ int __qla24xx_parse_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport,
 {
 	int rval = QLA_SUCCESS;
 	uint64_t zero = 0;
+	u8 current_login_state, last_login_state;
+
+	if (fcport->fc4f_nvme) {
+		current_login_state = pd->current_login_state >> 4;
+		last_login_state = pd->last_login_state >> 4;
+	} else {
+		current_login_state = pd->current_login_state & 0xf;
+		last_login_state = pd->last_login_state & 0xf;
+	}
 
 	/* Check for logged in state. */
-	if (pd->current_login_state != PDS_PRLI_COMPLETE &&
-		pd->last_login_state != PDS_PRLI_COMPLETE) {
+	if (current_login_state != PDS_PRLI_COMPLETE &&
+	    last_login_state != PDS_PRLI_COMPLETE) {
 		ql_dbg(ql_dbg_mbx, vha, 0x119a,
 		    "Unable to verify login-state (%x/%x) for loop_id %x.\n",
-		    pd->current_login_state, pd->last_login_state,
-		    fcport->loop_id);
+		    current_login_state, last_login_state, fcport->loop_id);
 		rval = QLA_FUNCTION_FAILED;
 		goto gpd_error_out;
 	}
@@ -5998,12 +6006,17 @@ int __qla24xx_parse_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport,
 	fcport->d_id.b.al_pa = pd->port_id[2];
 	fcport->d_id.b.rsvd_1 = 0;
 
-	/* If not target must be initiator or unknown type. */
-	if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
-		fcport->port_type = FCT_INITIATOR;
-	else
-		fcport->port_type = FCT_TARGET;
-
+	if (fcport->fc4f_nvme) {
+		fcport->nvme_prli_service_param =
+		    pd->prli_nvme_svc_param_word_3;
+		fcport->port_type = FCT_NVME;
+	} else {
+		/* If not target must be initiator or unknown type. */
+		if ((pd->prli_svc_param_word_3[0] & BIT_4) == 0)
+			fcport->port_type = FCT_INITIATOR;
+		else
+			fcport->port_type = FCT_TARGET;
+	}
 	/* Passback COS information. */
 	fcport->supported_classes = (pd->flags & PDF_CLASS_2) ?
 		FC_COS_CLASS2 : FC_COS_CLASS3;
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 88e115fcea60..b2474952f858 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -4431,6 +4431,7 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
 	INIT_LIST_HEAD(&vha->plogi_ack_list);
 	INIT_LIST_HEAD(&vha->qp_list);
 	INIT_LIST_HEAD(&vha->gnl.fcports);
+	INIT_LIST_HEAD(&vha->nvme_rport_list);
 
 	spin_lock_init(&vha->work_lock);
 	spin_lock_init(&vha->cmd_list_lock);
@@ -4720,6 +4721,9 @@ qla2x00_do_work(struct scsi_qla_host *vha)
 			qla24xx_async_gpdb(vha, e->u.fcport.fcport,
 			    e->u.fcport.opt);
 			break;
+		case QLA_EVT_PRLI:
+			qla24xx_async_prli(vha, e->u.fcport.fcport);
+			break;
 		case QLA_EVT_GPSC:
 			qla24xx_async_gpsc(vha, e->u.fcport.fcport);
 			break;
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 8f75d27daae2..a2b310de429b 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -963,7 +963,6 @@ static void qlt_free_session_done(struct work_struct *work)
 		sess->logout_on_delete, sess->keep_nport_handle,
 		sess->send_els_logo);
 
-
 	if (!IS_SW_RESV_ADDR(sess->d_id)) {
 		if (sess->send_els_logo) {
 			qlt_port_logo_t logo;
@@ -1118,6 +1117,9 @@ void qlt_unreg_sess(struct fc_port *sess)
 	sess->last_rscn_gen = sess->rscn_gen;
 	sess->last_login_gen = sess->login_gen;
 
+	if (sess->nvme_flag & NVME_FLAG_REGISTERED)
+		schedule_work(&sess->nvme_del_work);
+
 	INIT_WORK(&sess->free_work, qlt_free_session_done);
 	schedule_work(&sess->free_work);
 }
-- 
2.12.0


* [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
  2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:28   ` Hannes Reinecke
  2017-06-21 20:48 ` [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration Madhani, Himanshu
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Duane Grigsby <duane.grigsby@cavium.com>

This patch adds logic to handle the completion of
FC-NVMe commands and creates a sub-command in the SRB
command structure to manage NVMe commands.

Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/scsi/qla2xxx/qla_def.h | 17 +++++++++
 drivers/scsi/qla2xxx/qla_fw.h  | 22 ++++++++++--
 drivers/scsi/qla2xxx/qla_isr.c | 79 ++++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_os.c  | 18 ++++++++--
 4 files changed, 131 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 6f8df9cea8ff..4d889eb2993e 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -412,6 +412,20 @@ struct srb_iocb {
 		struct {
 			struct imm_ntfy_from_isp *ntfy;
 		} nack;
+		struct {
+			__le16 comp_status;
+			uint16_t rsp_pyld_len;
+			uint8_t	aen_op;
+			void *desc;
+
+			/* These are only used with ls4 requests */
+			int cmd_len;
+			int rsp_len;
+			dma_addr_t cmd_dma;
+			dma_addr_t rsp_dma;
+			uint32_t dl;
+			uint32_t timeout_sec;
+		} nvme;
 	} u;
 
 	struct timer_list timer;
@@ -437,6 +451,7 @@ struct srb_iocb {
 #define SRB_NACK_PLOGI	16
 #define SRB_NACK_PRLI	17
 #define SRB_NACK_LOGO	18
+#define SRB_NVME_CMD	19
 #define SRB_PRLI_CMD	21
 
 enum {
@@ -4110,6 +4125,8 @@ typedef struct scsi_qla_host {
 	struct		nvme_fc_local_port *nvme_local_port;
 	atomic_t	nvme_ref_count;
 	struct list_head nvme_rport_list;
+	atomic_t 	nvme_active_aen_cnt;
+	uint16_t	nvme_last_rptd_aen;
 
 	uint16_t	fcoe_vlan_id;
 	uint16_t	fcoe_fcf_idx;
diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
index dcae62d4cbeb..b9c9886e8b1d 100644
--- a/drivers/scsi/qla2xxx/qla_fw.h
+++ b/drivers/scsi/qla2xxx/qla_fw.h
@@ -7,6 +7,9 @@
 #ifndef __QLA_FW_H
 #define __QLA_FW_H
 
+#include <linux/nvme.h>
+#include <linux/nvme-fc.h>
+
 #define MBS_CHECKSUM_ERROR	0x4010
 #define MBS_INVALID_PRODUCT_KEY	0x4020
 
@@ -603,9 +606,14 @@ struct sts_entry_24xx {
 
 	uint32_t residual_len;		/* FW calc residual transfer length. */
 
-	uint16_t reserved_1;
+	union {
+		uint16_t reserved_1;
+		uint16_t nvme_rsp_pyld_len;
+	};
+
 	uint16_t state_flags;		/* State flags. */
 #define SF_TRANSFERRED_DATA	BIT_11
+#define SF_NVME_ERSP            BIT_6
 #define SF_FCP_RSP_DMA		BIT_0
 
 	uint16_t retry_delay;
@@ -615,8 +623,16 @@ struct sts_entry_24xx {
 	uint32_t rsp_residual_count;	/* FCP RSP residual count. */
 
 	uint32_t sense_len;		/* FCP SENSE length. */
-	uint32_t rsp_data_len;		/* FCP response data length. */
-	uint8_t data[28];		/* FCP response/sense information. */
+
+	union {
+		struct {
+			uint32_t rsp_data_len;	/* FCP response data length  */
+			uint8_t data[28];	/* FCP rsp/sense information */
+		};
+		struct nvme_fc_ersp_iu nvme_ersp;
+		uint8_t nvme_ersp_data[32];
+	};
+
 	/*
 	 * If DIF Error is set in comp_status, these additional fields are
 	 * defined:
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index 40385bc1d1fa..477aea7c9a88 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -1799,6 +1799,79 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
 	sp->done(sp, 0);
 }
 
+static void
+qla24xx_nvme_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
+{
+	const char func[] = "NVME-IOCB";
+	fc_port_t *fcport;
+	srb_t *sp;
+	struct srb_iocb *iocb;
+	struct sts_entry_24xx *sts = (struct sts_entry_24xx *)tsk;
+	uint16_t        state_flags;
+	struct nvmefc_fcp_req *fd;
+	uint16_t        ret = 0;
+	struct srb_iocb *nvme;
+
+	sp = qla2x00_get_sp_from_handle(vha, func, req, tsk);
+	if (!sp)
+		return;
+
+	iocb = &sp->u.iocb_cmd;
+	fcport = sp->fcport;
+	iocb->u.nvme.comp_status = le16_to_cpu(sts->comp_status);
+	state_flags  = le16_to_cpu(sts->state_flags);
+	fd = iocb->u.nvme.desc;
+	nvme = &sp->u.iocb_cmd;
+
+	if (unlikely(nvme->u.nvme.aen_op))
+		atomic_dec(&sp->vha->nvme_active_aen_cnt);
+
+	/*
+	 * State flags: Bit 6 and 0.
+	 * If 0 is set, we don't care about 6.
+	 * both cases resp was dma'd to host buffer
+	 * if both are 0, that is good path case.
+	 * if six is set and 0 is clear, we need to
+	 * copy resp data from status iocb to resp buffer.
+	 */
+	if (!(state_flags & (SF_FCP_RSP_DMA | SF_NVME_ERSP))) {
+		iocb->u.nvme.rsp_pyld_len = 0;
+	} else if ((state_flags & SF_FCP_RSP_DMA)) {
+		iocb->u.nvme.rsp_pyld_len = le16_to_cpu(sts->nvme_rsp_pyld_len);
+	} else if (state_flags & SF_NVME_ERSP) {
+		uint32_t *inbuf, *outbuf;
+		uint16_t iter;
+
+		inbuf = (uint32_t *)&sts->nvme_ersp_data;
+		outbuf = (uint32_t *)fd->rspaddr;
+		iocb->u.nvme.rsp_pyld_len = le16_to_cpu(sts->nvme_rsp_pyld_len);
+		iter = iocb->u.nvme.rsp_pyld_len >> 2;
+		for (; iter; iter--)
+			*outbuf++ = swab32(*inbuf++);
+	} else { /* unhandled case */
+	    ql_log(ql_log_warn, fcport->vha, 0x503a,
+		"NVME-%s error. Unhandled state_flags of %x\n",
+		sp->name, state_flags);
+	}
+
+	fd->transferred_length = fd->payload_length -
+	    le32_to_cpu(sts->residual_len);
+
+	if (sts->entry_status) {
+		ql_log(ql_log_warn, fcport->vha, 0x5038,
+		    "NVME-%s error - hdl=%x entry-status(%x).\n",
+		    sp->name, sp->handle, sts->entry_status);
+		ret = QLA_FUNCTION_FAILED;
+	} else if (sts->comp_status != cpu_to_le16(CS_COMPLETE)) {
+		ql_log(ql_log_warn, fcport->vha, 0x5039,
+		    "NVME-%s error - hdl=%x completion status(%x) resid=%x  ox_id=%x\n",
+		    sp->name, sp->handle, sts->comp_status,
+		    le32_to_cpu(sts->residual_len), sts->ox_id);
+		ret = QLA_FUNCTION_FAILED;
+	}
+	sp->done(sp, ret);
+}
+
 /**
  * qla2x00_process_response_queue() - Process response queue entries.
  * @ha: SCSI driver HA context
@@ -2289,6 +2362,12 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
 		return;
 	}
 
+	/* NVME completion. */
+	if (sp->type == SRB_NVME_CMD) {
+		qla24xx_nvme_iocb_entry(vha, req, pkt);
+		return;
+	}
+
 	if (unlikely((state_flags & BIT_1) && (sp->type == SRB_BIDI_CMD))) {
 		qla25xx_process_bidir_status_iocb(vha, pkt, req, handle);
 		return;
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index b2474952f858..ef5211fd2154 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -695,8 +695,10 @@ qla2x00_sp_free_dma(void *ptr)
 	}
 
 end:
-	CMD_SP(cmd) = NULL;
-	qla2x00_rel_sp(sp);
+	if (sp->type != SRB_NVME_CMD) {
+		CMD_SP(cmd) = NULL;
+		qla2x00_rel_sp(sp);
+	}
 }
 
 void
@@ -5996,6 +5998,18 @@ qla2x00_timer(scsi_qla_host_t *vha)
 	if (!list_empty(&vha->work_list))
 		start_dpc++;
 
+	/*
+	 * FC-NVME
+	 * see if the active AEN count has changed from what was last reported.
+	 */
+	if (atomic_read(&vha->nvme_active_aen_cnt) != vha->nvme_last_rptd_aen) {
+		vha->nvme_last_rptd_aen =
+		    atomic_read(&vha->nvme_active_aen_cnt);
+		ql_log(ql_log_info, vha, 0x3002,
+		    "reporting new aen count of %d to the fw\n",
+		    vha->nvme_last_rptd_aen);
+	}
+
 	/* Schedule the DPC routine if needed */
 	if ((test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) ||
 	    test_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags) ||
-- 
2.12.0

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
  2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
  2017-06-21 20:48 ` [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:32   ` Hannes Reinecke
  2017-06-22  9:46   ` Johannes Thumshirn
  2017-06-21 20:48 ` [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server Madhani, Himanshu
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Duane Grigsby <duane.grigsby@cavium.com>

This code provides the interfaces to register remote and local ports
of FC4 type 0x28 with the FC-NVMe transport and transports requests
(FC-NVMe link services and FC-NVMe command IUs) to the fabric. It
also provides support for allocating hardware queues and aborting
FC-NVMe FC requests.

Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
---
 drivers/scsi/qla2xxx/Makefile   |   2 +-
 drivers/scsi/qla2xxx/qla_dbg.c  |   2 +-
 drivers/scsi/qla2xxx/qla_def.h  |   6 +
 drivers/scsi/qla2xxx/qla_gbl.h  |  11 +
 drivers/scsi/qla2xxx/qla_init.c |   8 +
 drivers/scsi/qla2xxx/qla_iocb.c |  36 ++
 drivers/scsi/qla2xxx/qla_isr.c  |  19 +
 drivers/scsi/qla2xxx/qla_mbx.c  |  21 ++
 drivers/scsi/qla2xxx/qla_nvme.c | 756 ++++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_nvme.h | 132 +++++++
 drivers/scsi/qla2xxx/qla_os.c   |  40 ++-
 11 files changed, 1024 insertions(+), 9 deletions(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_nvme.c
 create mode 100644 drivers/scsi/qla2xxx/qla_nvme.h

diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
index 44def6bb4bb0..0b767a0bb308 100644
--- a/drivers/scsi/qla2xxx/Makefile
+++ b/drivers/scsi/qla2xxx/Makefile
@@ -1,6 +1,6 @@
 qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
 		qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \
-		qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o
+		qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o
 
 obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
 obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o
diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
index cf4f47603a91..d840529fc023 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.c
+++ b/drivers/scsi/qla2xxx/qla_dbg.c
@@ -15,7 +15,7 @@
  * |                              |                    | 0x015b-0x0160	|
  * |                              |                    | 0x016e		|
  * | Mailbox commands             |       0x1199       | 0x1193		|
- * | Device Discovery             |       0x2131       | 0x210e-0x2116  |
+ * | Device Discovery             |       0x2134       | 0x210e-0x2116  |
  * |				  | 		       | 0x211a         |
  * |                              |                    | 0x211c-0x2128  |
  * |                              |                    | 0x212a-0x2130  |
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 4d889eb2993e..0dbcb84011b0 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -37,6 +37,7 @@
 #include "qla_bsg.h"
 #include "qla_nx.h"
 #include "qla_nx2.h"
+#include "qla_nvme.h"
 #define QLA2XXX_DRIVER_NAME	"qla2xxx"
 #define QLA2XXX_APIDEV		"ql2xapidev"
 #define QLA2XXX_MANUFACTURER	"QLogic Corporation"
@@ -423,6 +424,7 @@ struct srb_iocb {
 			int rsp_len;
 			dma_addr_t cmd_dma;
 			dma_addr_t rsp_dma;
+			enum nvmefc_fcp_datadir dir;
 			uint32_t dl;
 			uint32_t timeout_sec;
 		} nvme;
@@ -452,6 +454,7 @@ struct srb_iocb {
 #define SRB_NACK_PRLI	17
 #define SRB_NACK_LOGO	18
 #define SRB_NVME_CMD	19
+#define SRB_NVME_LS	20
 #define SRB_PRLI_CMD	21
 
 enum {
@@ -467,6 +470,7 @@ typedef struct srb {
 	uint8_t cmd_type;
 	uint8_t pad[3];
 	atomic_t ref_count;
+	wait_queue_head_t nvme_ls_waitQ;
 	struct fc_port *fcport;
 	struct scsi_qla_host *vha;
 	uint32_t handle;
@@ -2298,6 +2302,7 @@ typedef struct fc_port {
 
 	struct work_struct nvme_del_work;
 	atomic_t nvme_ref_count;
+	wait_queue_head_t nvme_waitQ;
 	uint32_t nvme_prli_service_param;
 #define NVME_PRLI_SP_CONF       BIT_7
 #define NVME_PRLI_SP_INITIATOR  BIT_5
@@ -4124,6 +4129,7 @@ typedef struct scsi_qla_host {
 
 	struct		nvme_fc_local_port *nvme_local_port;
 	atomic_t	nvme_ref_count;
+	wait_queue_head_t nvme_waitQ;
 	struct list_head nvme_rport_list;
 	atomic_t 	nvme_active_aen_cnt;
 	uint16_t	nvme_last_rptd_aen;
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index 6fbee11c1a18..c6af45f7d5d6 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -10,6 +10,16 @@
 #include <linux/interrupt.h>
 
 /*
+ * Global functions prototype in qla_nvme.c source file.
+ */
+extern void qla_nvme_register_hba(scsi_qla_host_t *);
+extern int  qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *);
+extern void qla_nvme_delete(scsi_qla_host_t *);
+extern void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
+extern void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *,
+    struct req_que *);
+
+/*
  * Global Function Prototypes in qla_init.c source file.
  */
 extern int qla2x00_initialize_adapter(scsi_qla_host_t *);
@@ -141,6 +151,7 @@ extern int ql2xiniexchg;
 extern int ql2xfwholdabts;
 extern int ql2xmvasynctoatio;
 extern int ql2xuctrlirq;
+extern int ql2xnvmeenable;
 
 extern int qla2x00_loop_reset(scsi_qla_host_t *);
 extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int);
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 8a2586a04961..7286a80f796c 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -4513,6 +4513,11 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
 	fcport->deleted = 0;
 	fcport->logout_on_delete = 1;
 
+	if (fcport->fc4f_nvme) {
+		qla_nvme_register_remote(vha, fcport);
+		return;
+	}
+
 	qla2x00_set_fcport_state(fcport, FCS_ONLINE);
 	qla2x00_iidma_fcport(vha, fcport);
 	qla24xx_update_fcport_fcp_prio(vha, fcport);
@@ -4662,6 +4667,9 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
 			break;
 	} while (0);
 
+	if (!vha->nvme_local_port && vha->flags.nvme_enabled)
+		qla_nvme_register_hba(vha);
+
 	if (rval)
 		ql_dbg(ql_dbg_disc, vha, 0x2068,
 		    "Configure fabric error exit rval=%d.\n", rval);
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index daa53235a28a..d40fa000615c 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -3155,6 +3155,39 @@ static void qla2x00_send_notify_ack_iocb(srb_t *sp,
 	nack->u.isp24.vp_index = ntfy->u.isp24.vp_index;
 }
 
+/*
+ * Build NVME LS request
+ */
+static int
+qla_nvme_ls(srb_t *sp, struct pt_ls4_request *cmd_pkt)
+{
+	struct srb_iocb *nvme;
+	int     rval = QLA_SUCCESS;
+
+	nvme = &sp->u.iocb_cmd;
+	cmd_pkt->entry_type = PT_LS4_REQUEST;
+	cmd_pkt->entry_count = 1;
+	cmd_pkt->control_flags = CF_LS4_ORIGINATOR << CF_LS4_SHIFT;
+
+	cmd_pkt->timeout = cpu_to_le16(nvme->u.nvme.timeout_sec);
+	cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id);
+	cmd_pkt->vp_index = sp->fcport->vha->vp_idx;
+
+	cmd_pkt->tx_dseg_count = 1;
+	cmd_pkt->tx_byte_count = nvme->u.nvme.cmd_len;
+	cmd_pkt->dseg0_len = nvme->u.nvme.cmd_len;
+	cmd_pkt->dseg0_address[0] = cpu_to_le32(LSD(nvme->u.nvme.cmd_dma));
+	cmd_pkt->dseg0_address[1] = cpu_to_le32(MSD(nvme->u.nvme.cmd_dma));
+
+	cmd_pkt->rx_dseg_count = 1;
+	cmd_pkt->rx_byte_count = nvme->u.nvme.rsp_len;
+	cmd_pkt->dseg1_len  = nvme->u.nvme.rsp_len;
+	cmd_pkt->dseg1_address[0] = cpu_to_le32(LSD(nvme->u.nvme.rsp_dma));
+	cmd_pkt->dseg1_address[1] = cpu_to_le32(MSD(nvme->u.nvme.rsp_dma));
+
+	return rval;
+}
+
 int
 qla2x00_start_sp(srb_t *sp)
 {
@@ -3211,6 +3244,9 @@ qla2x00_start_sp(srb_t *sp)
 	case SRB_FXIOCB_BCMD:
 		qlafx00_fxdisc_iocb(sp, pkt);
 		break;
+	case SRB_NVME_LS:
+		qla_nvme_ls(sp, pkt);
+		break;
 	case SRB_ABT_CMD:
 		IS_QLAFX00(ha) ?
 			qlafx00_abort_iocb(sp, pkt) :
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index 477aea7c9a88..011faa1dc618 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -2828,6 +2828,21 @@ qla24xx_abort_iocb_entry(scsi_qla_host_t *vha, struct req_que *req,
 	sp->done(sp, 0);
 }
 
+void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *vha, struct pt_ls4_request *pkt,
+    struct req_que *req)
+{
+	srb_t *sp;
+	const char func[] = "LS4_IOCB";
+	uint16_t comp_status;
+
+	sp = qla2x00_get_sp_from_handle(vha, func, req, pkt);
+	if (!sp)
+		return;
+
+	comp_status = le16_to_cpu(pkt->status);
+	sp->done(sp, comp_status);
+}
+
 /**
  * qla24xx_process_response_queue() - Process response queue entries.
  * @ha: SCSI driver HA context
@@ -2901,6 +2916,10 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
 		case CTIO_CRC2:
 			qlt_response_pkt_all_vps(vha, rsp, (response_t *)pkt);
 			break;
+		case PT_LS4_REQUEST:
+			qla24xx_nvme_ls4_iocb(vha, (struct pt_ls4_request *)pkt,
+			    rsp->req);
+			break;
 		case NOTIFY_ACK_TYPE:
 			if (pkt->handle == QLA_TGT_SKIP_HANDLE)
 				qlt_response_pkt_all_vps(vha, rsp,
diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
index 1eac67e8fdfd..0764b6172ed1 100644
--- a/drivers/scsi/qla2xxx/qla_mbx.c
+++ b/drivers/scsi/qla2xxx/qla_mbx.c
@@ -560,6 +560,8 @@ qla2x00_load_ram(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t risc_addr,
 }
 
 #define	EXTENDED_BB_CREDITS	BIT_0
+#define	NVME_ENABLE_FLAG	BIT_3
+
 /*
  * qla2x00_execute_fw
  *     Start adapter firmware.
@@ -601,6 +603,9 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr)
 		} else
 			mcp->mb[4] = 0;
 
+		if (ql2xnvmeenable && IS_QLA27XX(ha))
+			mcp->mb[4] |= NVME_ENABLE_FLAG;
+
 		if (ha->flags.exlogins_enabled)
 			mcp->mb[4] |= ENABLE_EXTENDED_LOGIN;
 
@@ -941,6 +946,22 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha)
 			ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1191,
 			    "%s: Firmware supports Exchange Offload 0x%x\n",
 			    __func__, ha->fw_attributes_h);
+
+		/* bit 26 of fw_attributes */
+		if ((ha->fw_attributes_h & 0x400) && ql2xnvmeenable) {
+			struct init_cb_24xx *icb;
+
+			icb = (struct init_cb_24xx *)ha->init_cb;
+			/*
+			 * fw supports nvme and driver load
+			 * parameter requested nvme
+			 */
+			vha->flags.nvme_enabled = 1;
+			icb->firmware_options_2 &= cpu_to_le32(~0xf);
+			ha->zio_mode = 0;
+			ha->zio_timer = 0;
+		}
+
 	}
 
 	if (IS_QLA27XX(ha)) {
diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
new file mode 100644
index 000000000000..1da8fa8f641d
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -0,0 +1,756 @@
+/*
+ * QLogic Fibre Channel HBA Driver
+ * Copyright (c)  2003-2017 QLogic Corporation
+ *
+ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
+#include "qla_nvme.h"
+#include "qla_def.h"
+#include <linux/scatterlist.h>
+#include <linux/delay.h>
+#include <linux/nvme.h>
+#include <linux/nvme-fc.h>
+
+static struct nvme_fc_port_template qla_nvme_fc_transport;
+
+static void qla_nvme_unregister_remote_port(struct work_struct *);
+
+int qla_nvme_register_remote(scsi_qla_host_t *vha, fc_port_t *fcport)
+{
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct nvme_rport *rport;
+	int ret;
+
+	if (fcport->nvme_flag & NVME_FLAG_REGISTERED)
+		return 0;
+
+	if (!vha->flags.nvme_enabled) {
+		ql_log(ql_log_info, vha, 0x2100,
+		    "%s: Not registering target since Host NVME is not enabled\n",
+		    __func__);
+		return 0;
+	}
+
+	if (!(fcport->nvme_prli_service_param &
+	    (NVME_PRLI_SP_TARGET | NVME_PRLI_SP_DISCOVERY)))
+		return 0;
+
+	INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port);
+	rport = kzalloc(sizeof(*rport), GFP_KERNEL);
+	if (!rport) {
+		ql_log(ql_log_warn, vha, 0x2101,
+		    "%s: unable to alloc memory\n", __func__);
+		return -ENOMEM;
+	}
+
+	rport->req.port_name = wwn_to_u64(fcport->port_name);
+	rport->req.node_name = wwn_to_u64(fcport->node_name);
+	rport->req.port_role = 0;
+
+	if (fcport->nvme_prli_service_param & NVME_PRLI_SP_INITIATOR)
+		rport->req.port_role = FC_PORT_ROLE_NVME_INITIATOR;
+
+	if (fcport->nvme_prli_service_param & NVME_PRLI_SP_TARGET)
+		rport->req.port_role |= FC_PORT_ROLE_NVME_TARGET;
+
+	if (fcport->nvme_prli_service_param & NVME_PRLI_SP_DISCOVERY)
+		rport->req.port_role |= FC_PORT_ROLE_NVME_DISCOVERY;
+
+	rport->req.port_id = fcport->d_id.b24;
+
+	ql_log(ql_log_info, vha, 0x2102,
+	    "%s: traddr=pn-0x%016llx:nn-0x%016llx PortID:%06x\n",
+	    __func__, rport->req.port_name, rport->req.node_name,
+	    rport->req.port_id);
+
+	ret = nvme_fc_register_remoteport(vha->nvme_local_port, &rport->req,
+	    &fcport->nvme_remote_port);
+	if (ret) {
+		ql_log(ql_log_warn, vha, 0x212e,
+		    "Failed to register remote port. Transport returned %d\n",
+		    ret);
+		kfree(rport);
+		return ret;
+	}
+
+	fcport->nvme_remote_port->private = fcport;
+	fcport->nvme_flag |= NVME_FLAG_REGISTERED;
+	atomic_set(&fcport->nvme_ref_count, 1);
+	init_waitqueue_head(&fcport->nvme_waitQ);
+	rport->fcport = fcport;
+	list_add_tail(&rport->list, &vha->nvme_rport_list);
+#endif
+	return 0;
+}
+
+/* Allocate a queue for NVMe traffic */
+static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport, unsigned int qidx,
+    u16 qsize, void **handle)
+{
+	struct scsi_qla_host *vha;
+	struct qla_hw_data *ha;
+	struct qla_qpair *qpair;
+
+	if (!qidx)
+		qidx++;
+
+	vha = (struct scsi_qla_host *)lport->private;
+	ha = vha->hw;
+
+	ql_log(ql_log_info, vha, 0x2104,
+	    "%s: handle %p, idx =%d, qsize %d\n",
+	    __func__, handle, qidx, qsize);
+
+	if (qidx > qla_nvme_fc_transport.max_hw_queues) {
+		ql_log(ql_log_warn, vha, 0x212f,
+		    "%s: Illegal qidx=%d. Max=%d\n",
+		    __func__, qidx, qla_nvme_fc_transport.max_hw_queues);
+		return -EINVAL;
+	}
+
+	if (ha->queue_pair_map[qidx]) {
+		*handle = ha->queue_pair_map[qidx];
+		ql_log(ql_log_info, vha, 0x2121,
+		    "Returning existing qpair of %p for idx=%x\n",
+		    *handle, qidx);
+		return 0;
+	}
+
+	ql_log(ql_log_warn, vha, 0xffff,
+	    "allocating q for idx=%x w/o cpu mask\n", qidx);
+	qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true);
+	if (qpair == NULL) {
+		ql_log(ql_log_warn, vha, 0x2122,
+		    "Failed to allocate qpair\n");
+		return -EINVAL;
+	}
+	*handle = qpair;
+
+	return 0;
+}
+
+static void qla_nvme_sp_ls_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+	struct srb_iocb *nvme;
+	struct nvmefc_ls_req   *fd;
+	struct nvme_private *priv;
+
+	if (atomic_read(&sp->ref_count) == 0) {
+		ql_log(ql_log_warn, sp->fcport->vha, 0x2123,
+		    "SP reference-count to ZERO on LS_done -- sp=%p.\n", sp);
+		return;
+	}
+
+	if (!atomic_dec_and_test(&sp->ref_count))
+		return;
+
+	if (res)
+		res = -EINVAL;
+
+	nvme = &sp->u.iocb_cmd;
+	fd = nvme->u.nvme.desc;
+	priv = fd->private;
+	priv->comp_status = res;
+	schedule_work(&priv->ls_work);
+	/* work schedule doesn't need the sp */
+	qla2x00_rel_sp(sp);
+}
+
+static void qla_nvme_sp_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+	struct srb_iocb *nvme;
+	struct nvmefc_fcp_req *fd;
+
+	nvme = &sp->u.iocb_cmd;
+	fd = nvme->u.nvme.desc;
+
+	if (!atomic_dec_and_test(&sp->ref_count))
+		return;
+
+	if (!(sp->fcport->nvme_flag & NVME_FLAG_REGISTERED))
+		goto rel;
+
+	if (unlikely(nvme->u.nvme.comp_status || res))
+		fd->status = -EINVAL;
+	else
+		fd->status = 0;
+
+	fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len;
+	fd->done(fd);
+rel:
+	qla2xxx_rel_qpair_sp(sp->qpair, sp);
+}
+
+static void qla_nvme_ls_abort(struct nvme_fc_local_port *lport,
+    struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
+{
+	struct nvme_private *priv = fd->private;
+	fc_port_t *fcport = rport->private;
+	srb_t *sp = priv->sp;
+	int rval;
+	struct qla_hw_data *ha = fcport->vha->hw;
+
+	rval = ha->isp_ops->abort_command(sp);
+	if (rval != QLA_SUCCESS)
+		ql_log(ql_log_warn, fcport->vha, 0x2125,
+		    "%s: failed to abort LS command for SP:%p rval=%x\n",
+		    __func__, sp, rval);
+
+	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
+	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);
+}
+
+static void qla_nvme_ls_complete(struct work_struct *work)
+{
+	struct nvme_private *priv =
+	    container_of(work, struct nvme_private, ls_work);
+	struct nvmefc_ls_req *fd = priv->fd;
+
+	fd->done(fd, priv->comp_status);
+}
+
+static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+    struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd)
+{
+	fc_port_t *fcport = (fc_port_t *)rport->private;
+	struct srb_iocb   *nvme;
+	struct nvme_private *priv = fd->private;
+	struct scsi_qla_host *vha;
+	int     rval = QLA_FUNCTION_FAILED;
+	struct qla_hw_data *ha;
+	srb_t           *sp;
+
+	if (!(fcport->nvme_flag & NVME_FLAG_REGISTERED))
+		return rval;
+
+	vha = fcport->vha;
+	ha = vha->hw;
+	/* Alloc SRB structure */
+	sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+	if (!sp)
+		return rval;
+
+	sp->type = SRB_NVME_LS;
+	sp->name = "nvme_ls";
+	sp->done = qla_nvme_sp_ls_done;
+	atomic_set(&sp->ref_count, 1);
+	init_waitqueue_head(&sp->nvme_ls_waitQ);
+	nvme = &sp->u.iocb_cmd;
+	priv->sp = sp;
+	priv->fd = fd;
+	INIT_WORK(&priv->ls_work, qla_nvme_ls_complete);
+	nvme->u.nvme.desc = fd;
+	nvme->u.nvme.dir = 0;
+	nvme->u.nvme.dl = 0;
+	nvme->u.nvme.cmd_len = fd->rqstlen;
+	nvme->u.nvme.rsp_len = fd->rsplen;
+	nvme->u.nvme.rsp_dma = fd->rspdma;
+	nvme->u.nvme.timeout_sec = fd->timeout;
+	nvme->u.nvme.cmd_dma = dma_map_single(&ha->pdev->dev, fd->rqstaddr,
+	    fd->rqstlen, DMA_TO_DEVICE);
+	dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+	    fd->rqstlen, DMA_TO_DEVICE);
+
+	rval = qla2x00_start_sp(sp);
+	if (rval != QLA_SUCCESS) {
+		ql_log(ql_log_warn, vha, 0x700e,
+		    "qla2x00_start_sp failed = %d\n", rval);
+		atomic_dec(&sp->ref_count);
+		wake_up(&sp->nvme_ls_waitQ);
+		return rval;
+	}
+
+	return rval;
+}
+
+static void qla_nvme_fcp_abort(struct nvme_fc_local_port *lport,
+    struct nvme_fc_remote_port *rport, void *hw_queue_handle,
+    struct nvmefc_fcp_req *fd)
+{
+	struct nvme_private *priv = fd->private;
+	srb_t *sp = priv->sp;
+	int rval;
+	fc_port_t *fcport = rport->private;
+	struct qla_hw_data *ha = fcport->vha->hw;
+
+	rval = ha->isp_ops->abort_command(sp);
+	if (rval != QLA_SUCCESS)
+		ql_log(ql_log_warn, fcport->vha, 0x2127,
+		    "%s: failed to abort command for SP:%p rval=%x\n",
+		    __func__, sp, rval);
+
+	ql_dbg(ql_dbg_io, fcport->vha, 0x2126,
+	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);
+}
+
+static void qla_nvme_poll(struct nvme_fc_local_port *lport, void *hw_queue_handle)
+{
+	struct scsi_qla_host *vha = lport->private;
+	unsigned long flags;
+	struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle;
+
+	/* Acquire ring specific lock */
+	spin_lock_irqsave(&qpair->qp_lock, flags);
+	qla24xx_process_response_queue(vha, qpair->rsp);
+	spin_unlock_irqrestore(&qpair->qp_lock, flags);
+}
+
+static int qla2x00_start_nvme_mq(srb_t *sp)
+{
+	unsigned long   flags;
+	uint32_t        *clr_ptr;
+	uint32_t        index;
+	uint32_t        handle;
+	struct cmd_nvme *cmd_pkt;
+	uint16_t        cnt, i;
+	uint16_t        req_cnt;
+	uint16_t        tot_dsds;
+	uint16_t	avail_dsds;
+	uint32_t	*cur_dsd;
+	struct req_que *req = NULL;
+	struct scsi_qla_host *vha = sp->fcport->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct qla_qpair *qpair = sp->qpair;
+	struct srb_iocb *nvme = &sp->u.iocb_cmd;
+	struct scatterlist *sgl, *sg;
+	struct nvmefc_fcp_req *fd = nvme->u.nvme.desc;
+	uint32_t        rval = QLA_SUCCESS;
+
+	/* Setup qpair pointers */
+	req = qpair->req;
+	tot_dsds = fd->sg_cnt;
+
+	/* Acquire qpair specific lock */
+	spin_lock_irqsave(&qpair->qp_lock, flags);
+
+	/* Check for room in outstanding command list. */
+	handle = req->current_outstanding_cmd;
+	for (index = 1; index < req->num_outstanding_cmds; index++) {
+		handle++;
+		if (handle == req->num_outstanding_cmds)
+			handle = 1;
+		if (!req->outstanding_cmds[handle])
+			break;
+	}
+
+	if (index == req->num_outstanding_cmds) {
+		rval = -1;
+		goto queuing_error;
+	}
+	req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+	if (req->cnt < (req_cnt + 2)) {
+		cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
+		    RD_REG_DWORD_RELAXED(req->req_q_out);
+
+		if (req->ring_index < cnt)
+			req->cnt = cnt - req->ring_index;
+		else
+			req->cnt = req->length - (req->ring_index - cnt);
+
+		if (req->cnt < (req_cnt + 2)) {
+			rval = -1;
+			goto queuing_error;
+		}
+	}
+
+	if (unlikely(!fd->sqid)) {
+		struct nvme_fc_cmd_iu *cmd = fd->cmdaddr;
+		if (cmd->sqe.common.opcode == nvme_admin_async_event) {
+			nvme->u.nvme.aen_op = 1;
+			atomic_inc(&vha->nvme_active_aen_cnt);
+		}
+	}
+
+	/* Build command packet. */
+	req->current_outstanding_cmd = handle;
+	req->outstanding_cmds[handle] = sp;
+	sp->handle = handle;
+	req->cnt -= req_cnt;
+
+	cmd_pkt = (struct cmd_nvme *)req->ring_ptr;
+	cmd_pkt->handle = MAKE_HANDLE(req->id, handle);
+
+	/* Zero out remaining portion of packet. */
+	clr_ptr = (uint32_t *)cmd_pkt + 2;
+	memset(clr_ptr, 0, REQUEST_ENTRY_SIZE - 8);
+
+	cmd_pkt->entry_status = 0;
+
+	/* Update entry type to indicate Command NVME IOCB */
+	cmd_pkt->entry_type = COMMAND_NVME;
+
+	/* No data transfer; TODO: how do we check buffer len == 0? */
+	if (fd->io_dir == NVMEFC_FCP_READ) {
+		cmd_pkt->control_flags =
+		    cpu_to_le16(CF_READ_DATA | CF_NVME_ENABLE);
+		vha->qla_stats.input_bytes += fd->payload_length;
+		vha->qla_stats.input_requests++;
+	} else if (fd->io_dir == NVMEFC_FCP_WRITE) {
+		cmd_pkt->control_flags =
+		    cpu_to_le16(CF_WRITE_DATA | CF_NVME_ENABLE);
+		vha->qla_stats.output_bytes += fd->payload_length;
+		vha->qla_stats.output_requests++;
+	} else if (fd->io_dir == 0) {
+		cmd_pkt->control_flags = cpu_to_le16(CF_NVME_ENABLE);
+	}
+
+	/* Set NPORT-ID */
+	cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id);
+	cmd_pkt->port_id[0] = sp->fcport->d_id.b.al_pa;
+	cmd_pkt->port_id[1] = sp->fcport->d_id.b.area;
+	cmd_pkt->port_id[2] = sp->fcport->d_id.b.domain;
+	cmd_pkt->vp_index = sp->fcport->vha->vp_idx;
+
+	/* NVME RSP IU */
+	cmd_pkt->nvme_rsp_dsd_len = cpu_to_le16(fd->rsplen);
+	cmd_pkt->nvme_rsp_dseg_address[0] = cpu_to_le32(LSD(fd->rspdma));
+	cmd_pkt->nvme_rsp_dseg_address[1] = cpu_to_le32(MSD(fd->rspdma));
+
+	/* NVME CMND IU */
+	cmd_pkt->nvme_cmnd_dseg_len = cpu_to_le16(fd->cmdlen);
+	cmd_pkt->nvme_cmnd_dseg_address[0] = cpu_to_le32(LSD(fd->cmddma));
+	cmd_pkt->nvme_cmnd_dseg_address[1] = cpu_to_le32(MSD(fd->cmddma));
+
+	cmd_pkt->dseg_count = cpu_to_le16(tot_dsds);
+	cmd_pkt->byte_count = cpu_to_le32(fd->payload_length);
+
+	/* One DSD is available in the Command Type NVME IOCB */
+	avail_dsds = 1;
+	cur_dsd = (uint32_t *)&cmd_pkt->nvme_data_dseg_address[0];
+	sgl = fd->first_sgl;
+
+	/* Load data segments */
+	for_each_sg(sgl, sg, tot_dsds, i) {
+		dma_addr_t      sle_dma;
+		cont_a64_entry_t *cont_pkt;
+
+		/* Allocate additional continuation packets? */
+		if (avail_dsds == 0) {
+			/*
+			 * Five DSDs are available in the Continuation
+			 * Type 1 IOCB.
+			 */
+
+			/* Adjust ring index */
+			req->ring_index++;
+			if (req->ring_index == req->length) {
+				req->ring_index = 0;
+				req->ring_ptr = req->ring;
+			} else {
+				req->ring_ptr++;
+			}
+			cont_pkt = (cont_a64_entry_t *)req->ring_ptr;
+			cont_pkt->entry_type = cpu_to_le32(CONTINUE_A64_TYPE);
+
+			cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
+			avail_dsds = 5;
+		}
+
+		sle_dma = sg_dma_address(sg);
+		*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
+		*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
+		*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+		avail_dsds--;
+	}
+
+	/* Set total entry count. */
+	cmd_pkt->entry_count = (uint8_t)req_cnt;
+	wmb();
+
+	/* Adjust ring index. */
+	req->ring_index++;
+	if (req->ring_index == req->length) {
+		req->ring_index = 0;
+		req->ring_ptr = req->ring;
+	} else {
+		req->ring_ptr++;
+	}
+
+	/* Set chip new ring index. */
+	WRT_REG_DWORD(req->req_q_in, req->ring_index);
+
+queuing_error:
+	spin_unlock_irqrestore(&qpair->qp_lock, flags);
+	return rval;
+}
+
+/* Post a command */
+static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
+    struct nvme_fc_remote_port *rport, void *hw_queue_handle,
+    struct nvmefc_fcp_req *fd)
+{
+	fc_port_t *fcport;
+	struct srb_iocb *nvme;
+	struct scsi_qla_host *vha;
+	int rval = QLA_FUNCTION_FAILED;
+	srb_t *sp;
+	struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle;
+	struct nvme_private *priv;
+
+	if (!fd) {
+		ql_log(ql_log_warn, NULL, 0x2134, "NO NVMe FCP request\n");
+		return rval;
+	}
+
+	priv = fd->private;
+	fcport = (fc_port_t *)rport->private;
+	if (!fcport) {
+		ql_log(ql_log_warn, NULL, 0x210e, "No fcport ptr\n");
+		return rval;
+	}
+
+	vha = fcport->vha;
+	if ((!qpair) || (!(fcport->nvme_flag & NVME_FLAG_REGISTERED)))
+		return -EBUSY;
+
+	/* Alloc SRB structure */
+	sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);
+	if (!sp)
+		return -EIO;
+
+	atomic_set(&sp->ref_count, 1);
+	init_waitqueue_head(&sp->nvme_ls_waitQ);
+	priv->sp = sp;
+	sp->type = SRB_NVME_CMD;
+	sp->name = "nvme_cmd";
+	sp->done = qla_nvme_sp_done;
+	sp->qpair = qpair;
+	nvme = &sp->u.iocb_cmd;
+	nvme->u.nvme.desc = fd;
+
+	rval = qla2x00_start_nvme_mq(sp);
+	if (rval != QLA_SUCCESS) {
+		ql_log(ql_log_warn, vha, 0x212d,
+		    "qla2x00_start_nvme_mq failed = %d\n", rval);
+		atomic_dec(&sp->ref_count);
+		wake_up(&sp->nvme_ls_waitQ);
+		return -EIO;
+	}
+
+	return rval;
+}
+
+static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport)
+{
+	struct scsi_qla_host *vha = lport->private;
+
+	atomic_dec(&vha->nvme_ref_count);
+	wake_up_all(&vha->nvme_waitQ);
+
+	ql_log(ql_log_info, vha, 0x210f,
+	    "localport delete of %p completed.\n", vha->nvme_local_port);
+	vha->nvme_local_port = NULL;
+}
+
+static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport)
+{
+	fc_port_t *fcport;
+	struct nvme_rport *r_port, *trport;
+
+	fcport = (fc_port_t *)rport->private;
+	fcport->nvme_remote_port = NULL;
+	fcport->nvme_flag &= ~NVME_FLAG_REGISTERED;
+	atomic_dec(&fcport->nvme_ref_count);
+	wake_up_all(&fcport->nvme_waitQ);
+
+	list_for_each_entry_safe(r_port, trport,
+	    &fcport->vha->nvme_rport_list, list) {
+		if (r_port->fcport == fcport) {
+			list_del(&r_port->list);
+			kfree(r_port);
+			break;
+		}
+	}
+
+	ql_log(ql_log_info, fcport->vha, 0x2110,
+	    "remoteport_delete of %p completed.\n", fcport);
+}
+
+static struct nvme_fc_port_template qla_nvme_fc_transport = {
+	.localport_delete = qla_nvme_localport_delete,
+	.remoteport_delete = qla_nvme_remoteport_delete,
+	.create_queue   = qla_nvme_alloc_queue,
+	.delete_queue 	= NULL,
+	.ls_req		= qla_nvme_ls_req,
+	.ls_abort	= qla_nvme_ls_abort,
+	.fcp_io		= qla_nvme_post_cmd,
+	.fcp_abort	= qla_nvme_fcp_abort,
+	.poll_queue	= qla_nvme_poll,
+	.max_hw_queues  = 8,
+	.max_sgl_segments = 128,
+	.max_dif_sgl_segments = 64,
+	.dma_boundary = 0xFFFFFFFF,
+	.local_priv_sz  = 8,
+	.remote_priv_sz = 0,
+	.lsrqst_priv_sz = sizeof(struct nvme_private),
+	.fcprqst_priv_sz = sizeof(struct nvme_private),
+};
+
+#define NVME_ABORT_POLLING_PERIOD    2
+static int qla_nvme_wait_on_command(srb_t *sp)
+{
+	int ret = QLA_SUCCESS;
+
+	wait_event_timeout(sp->nvme_ls_waitQ, (atomic_read(&sp->ref_count) > 1),
+	    NVME_ABORT_POLLING_PERIOD*HZ);
+
+	if (atomic_read(&sp->ref_count) > 1)
+		ret = QLA_FUNCTION_FAILED;
+
+	return ret;
+}
+
+static int qla_nvme_wait_on_rport_del(fc_port_t *fcport)
+{
+	int ret = QLA_SUCCESS;
+
+	wait_event_timeout(fcport->nvme_waitQ,
+	    atomic_read(&fcport->nvme_ref_count),
+	    NVME_ABORT_POLLING_PERIOD*HZ);
+
+	if (atomic_read(&fcport->nvme_ref_count)) {
+		ret = QLA_FUNCTION_FAILED;
+		ql_log(ql_log_info, fcport->vha, 0x2111,
+		    "timed out waiting for fcport=%p to delete\n", fcport);
+	}
+
+	return ret;
+}
+
+void qla_nvme_abort(struct qla_hw_data *ha, srb_t *sp)
+{
+	int rval;
+
+	rval = ha->isp_ops->abort_command(sp);
+	if (!rval) {
+		if (!qla_nvme_wait_on_command(sp))
+			ql_log(ql_log_warn, NULL, 0x2112,
+			    "nvme_wait_on_command timed out waiting on sp=%p\n",
+			    sp);
+	}
+}
+
+static void qla_nvme_abort_all(fc_port_t *fcport)
+{
+	int que, cnt;
+	unsigned long flags;
+	srb_t *sp;
+	struct qla_hw_data *ha = fcport->vha->hw;
+	struct req_que *req;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+	for (que = 0; que < ha->max_req_queues; que++) {
+		req = ha->req_q_map[que];
+		if (!req)
+			continue;
+		if (!req->outstanding_cmds)
+			continue;
+		for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
+			sp = req->outstanding_cmds[cnt];
+			if ((sp) && ((sp->type == SRB_NVME_CMD) ||
+			    (sp->type == SRB_NVME_LS)) &&
+				(sp->fcport == fcport)) {
+				atomic_inc(&sp->ref_count);
+				spin_unlock_irqrestore(&ha->hardware_lock,
+				    flags);
+				qla_nvme_abort(ha, sp);
+				spin_lock_irqsave(&ha->hardware_lock, flags);
+				req->outstanding_cmds[cnt] = NULL;
+				sp->done(sp, 1);
+			}
+		}
+	}
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+static void qla_nvme_unregister_remote_port(struct work_struct *work)
+{
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct fc_port *fcport = container_of(work, struct fc_port,
+	    nvme_del_work);
+	struct nvme_rport *rport, *trport;
+
+	list_for_each_entry_safe(rport, trport,
+	    &fcport->vha->nvme_rport_list, list) {
+		if (rport->fcport == fcport) {
+			ql_log(ql_log_info, fcport->vha, 0x2113,
+			    "%s: fcport=%p\n", __func__, fcport);
+			nvme_fc_unregister_remoteport(
+			    fcport->nvme_remote_port);
+		}
+	}
+#endif
+}
+
+void qla_nvme_delete(scsi_qla_host_t *vha)
+{
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct nvme_rport *rport, *trport;
+	fc_port_t *fcport;
+	int nv_ret;
+
+	list_for_each_entry_safe(rport, trport, &vha->nvme_rport_list, list) {
+		fcport = rport->fcport;
+
+		ql_log(ql_log_info, fcport->vha, 0x2114, "%s: fcport=%p\n",
+		    __func__, fcport);
+
+		nvme_fc_unregister_remoteport(fcport->nvme_remote_port);
+		qla_nvme_wait_on_rport_del(fcport);
+		qla_nvme_abort_all(fcport);
+	}
+
+	if (vha->nvme_local_port) {
+		nv_ret = nvme_fc_unregister_localport(vha->nvme_local_port);
+		if (nv_ret == 0)
+			ql_log(ql_log_info, vha, 0x2116,
+			    "unregistered localport=%p\n",
+			    vha->nvme_local_port);
+		else
+			ql_log(ql_log_info, vha, 0x2115,
+			    "Unregister of localport failed\n");
+	}
+#endif
+}
+
+void qla_nvme_register_hba(scsi_qla_host_t *vha)
+{
+#if (IS_ENABLED(CONFIG_NVME_FC))
+	struct nvme_fc_port_template *tmpl;
+	struct qla_hw_data *ha;
+	struct nvme_fc_port_info pinfo;
+	int ret;
+
+	ha = vha->hw;
+	tmpl = &qla_nvme_fc_transport;
+
+	WARN_ON(vha->nvme_local_port);
+	WARN_ON(ha->max_req_queues < 3);
+
+	qla_nvme_fc_transport.max_hw_queues =
+	    min((uint8_t)(qla_nvme_fc_transport.max_hw_queues),
+		(uint8_t)(ha->max_req_queues - 2));
+
+	pinfo.node_name = wwn_to_u64(vha->node_name);
+	pinfo.port_name = wwn_to_u64(vha->port_name);
+	pinfo.port_role = FC_PORT_ROLE_NVME_INITIATOR;
+	pinfo.port_id = vha->d_id.b24;
+
+	ql_log(ql_log_info, vha, 0xffff,
+	    "register_localport: host-traddr=pn-0x%llx:nn-0x%llx on portID:%x\n",
+	    pinfo.port_name, pinfo.node_name, pinfo.port_id);
+	qla_nvme_fc_transport.dma_boundary = vha->host->dma_boundary;
+
+	ret = nvme_fc_register_localport(&pinfo, tmpl,
+	    get_device(&ha->pdev->dev), &vha->nvme_local_port);
+	if (ret) {
+		ql_log(ql_log_warn, vha, 0xffff,
+		    "register_localport failed: ret=%x\n", ret);
+		return;
+	}
+	atomic_set(&vha->nvme_ref_count, 1);
+	vha->nvme_local_port->private = vha;
+	init_waitqueue_head(&vha->nvme_waitQ);
+#endif
+}
diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h
new file mode 100644
index 000000000000..dfe56f207b28
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_nvme.h
@@ -0,0 +1,132 @@
+/*
+ * QLogic Fibre Channel HBA Driver
+ * Copyright (c)  2003-2017 QLogic Corporation
+ *
+ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
+#ifndef __QLA_NVME_H
+#define __QLA_NVME_H
+
+#include <linux/blk-mq.h>
+#include <uapi/scsi/fc/fc_fs.h>
+#include <uapi/scsi/fc/fc_els.h>
+#include <linux/nvme-fc-driver.h>
+
+#define NVME_ATIO_CMD_OFF 32
+#define NVME_FIRST_PACKET_CMDLEN (64 - NVME_ATIO_CMD_OFF)
+#define Q2T_NVME_NUM_TAGS 2048
+#define QLA_MAX_FC_SEGMENTS 64
+
+struct srb;
+struct nvme_private {
+	struct srb	*sp;
+	struct nvmefc_ls_req *fd;
+	struct work_struct ls_work;
+	int comp_status;
+};
+
+struct nvme_rport {
+	struct nvme_fc_port_info req;
+	struct list_head list;
+	struct fc_port *fcport;
+};
+
+#define COMMAND_NVME    0x88            /* Command Type FC-NVMe IOCB */
+struct cmd_nvme {
+	uint8_t entry_type;             /* Entry type. */
+	uint8_t entry_count;            /* Entry count. */
+	uint8_t sys_define;             /* System defined. */
+	uint8_t entry_status;           /* Entry Status. */
+
+	uint32_t handle;                /* System handle. */
+	uint16_t nport_handle;          /* N_PORT handle. */
+	uint16_t timeout;               /* Command timeout. */
+
+	uint16_t dseg_count;            /* Data segment count. */
+	uint16_t nvme_rsp_dsd_len;      /* NVMe RSP DSD length */
+
+	uint64_t rsvd;
+
+	uint16_t control_flags;         /* Control Flags */
+#define CF_NVME_ENABLE                  BIT_9
+#define CF_DIF_SEG_DESCR_ENABLE         BIT_3
+#define CF_DATA_SEG_DESCR_ENABLE        BIT_2
+#define CF_READ_DATA                    BIT_1
+#define CF_WRITE_DATA                   BIT_0
+
+	uint16_t nvme_cmnd_dseg_len;             /* Data segment length. */
+	uint32_t nvme_cmnd_dseg_address[2];      /* Data segment address. */
+	uint32_t nvme_rsp_dseg_address[2];       /* Data segment address. */
+
+	uint32_t byte_count;            /* Total byte count. */
+
+	uint8_t port_id[3];             /* PortID of destination port. */
+	uint8_t vp_index;
+
+	uint32_t nvme_data_dseg_address[2];      /* Data segment address. */
+	uint32_t nvme_data_dseg_len;             /* Data segment length. */
+};
+
+#define PT_LS4_REQUEST 0x89	/* Link Service pass-through IOCB (request) */
+struct pt_ls4_request {
+	uint8_t entry_type;
+	uint8_t entry_count;
+	uint8_t sys_define;
+	uint8_t entry_status;
+	uint32_t handle;
+	uint16_t status;
+	uint16_t nport_handle;
+	uint16_t tx_dseg_count;
+	uint8_t  vp_index;
+	uint8_t  rsvd;
+	uint16_t timeout;
+	uint16_t control_flags;
+#define CF_LS4_SHIFT		13
+#define CF_LS4_ORIGINATOR	0
+#define CF_LS4_RESPONDER	1
+#define CF_LS4_RESPONDER_TERM	2
+
+	uint16_t rx_dseg_count;
+	uint16_t rsvd2;
+	uint32_t exchange_address;
+	uint32_t rsvd3;
+	uint32_t rx_byte_count;
+	uint32_t tx_byte_count;
+	uint32_t dseg0_address[2];
+	uint32_t dseg0_len;
+	uint32_t dseg1_address[2];
+	uint32_t dseg1_len;
+};
+
+#define PT_LS4_UNSOL 0x56	/* pass-up unsolicited rec FC-NVMe request */
+struct pt_ls4_rx_unsol {
+	uint8_t entry_type;
+	uint8_t entry_count;
+	uint16_t rsvd0;
+	uint16_t rsvd1;
+	uint8_t vp_index;
+	uint8_t rsvd2;
+	uint16_t rsvd3;
+	uint16_t nport_handle;
+	uint16_t frame_size;
+	uint16_t rsvd4;
+	uint32_t exchange_address;
+	uint8_t d_id[3];
+	uint8_t r_ctl;
+	uint8_t s_id[3];
+	uint8_t cs_ctl;
+	uint8_t f_ctl[3];
+	uint8_t type;
+	uint16_t seq_cnt;
+	uint8_t df_ctl;
+	uint8_t seq_id;
+	uint16_t rx_id;
+	uint16_t ox_id;
+	uint32_t param;
+	uint32_t desc0;
+#define PT_LS4_PAYLOAD_OFFSET 0x2c
+#define PT_LS4_FIRST_PACKET_LEN 20
+	uint32_t desc_len;
+	uint32_t payload[3];
+};
+#endif
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index ef5211fd2154..3b75d760b99e 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -120,7 +120,11 @@ MODULE_PARM_DESC(ql2xmaxqdepth,
 		"Maximum queue depth to set for each LUN. "
 		"Default is 32.");
 
+#if (IS_ENABLED(CONFIG_NVME_FC))
+int ql2xenabledif;
+#else
 int ql2xenabledif = 2;
+#endif
 module_param(ql2xenabledif, int, S_IRUGO);
 MODULE_PARM_DESC(ql2xenabledif,
 		" Enable T10-CRC-DIF:\n"
@@ -129,6 +133,16 @@ MODULE_PARM_DESC(ql2xenabledif,
 		"  1 -- Enable DIF for all types\n"
 		"  2 -- Enable DIF for all types, except Type 0.\n");
 
+#if (IS_ENABLED(CONFIG_NVME_FC))
+int ql2xnvmeenable = 1;
+#else
+int ql2xnvmeenable;
+#endif
+module_param(ql2xnvmeenable, int, 0644);
+MODULE_PARM_DESC(ql2xnvmeenable,
+    "Enables NVMe support. "
+    "0 - no NVMe. Default is 1 (enabled when CONFIG_NVME_FC is set).");
+
 int ql2xenablehba_err_chk = 2;
 module_param(ql2xenablehba_err_chk, int, S_IRUGO|S_IWUSR);
 MODULE_PARM_DESC(ql2xenablehba_err_chk,
@@ -267,6 +281,7 @@ static void qla2x00_clear_drv_active(struct qla_hw_data *);
 static void qla2x00_free_device(scsi_qla_host_t *);
 static void qla83xx_disable_laser(scsi_qla_host_t *vha);
 static int qla2xxx_map_queues(struct Scsi_Host *shost);
+static void qla2x00_destroy_deferred_work(struct qla_hw_data *);
 
 struct scsi_host_template qla2xxx_driver_template = {
 	.module			= THIS_MODULE,
@@ -695,7 +710,7 @@ qla2x00_sp_free_dma(void *ptr)
 	}
 
 end:
-	if (sp->type != SRB_NVME_CMD) {
+	if ((sp->type != SRB_NVME_CMD) && (sp->type != SRB_NVME_LS)) {
 		CMD_SP(cmd) = NULL;
 		qla2x00_rel_sp(sp);
 	}
@@ -1700,15 +1715,23 @@ qla2x00_abort_all_cmds(scsi_qla_host_t *vha, int res)
 			if (sp) {
 				req->outstanding_cmds[cnt] = NULL;
 				if (sp->cmd_type == TYPE_SRB) {
-					/*
-					 * Don't abort commands in adapter
-					 * during EEH recovery as it's not
-					 * accessible/responding.
-					 */
-					if (GET_CMD_SP(sp) &&
+					if ((sp->type == SRB_NVME_CMD) ||
+					    (sp->type == SRB_NVME_LS)) {
+						sp_get(sp);
+						spin_unlock_irqrestore(
+						    &ha->hardware_lock, flags);
+						qla_nvme_abort(ha, sp);
+						spin_lock_irqsave(
+						    &ha->hardware_lock, flags);
+					} else if (GET_CMD_SP(sp) &&
 					    !ha->flags.eeh_busy &&
 					    (sp->type == SRB_SCSI_CMD)) {
 						/*
+						 * Don't abort commands in
+						 * adapter during EEH
+						 * recovery as it's not
+						 * accessible/responding.
+						 *
 						 * Get a reference to the sp
 						 * and drop the lock. The
 						 * reference ensures this
@@ -3534,6 +3557,9 @@ qla2x00_remove_one(struct pci_dev *pdev)
 		return;
 
 	set_bit(UNLOADING, &base_vha->dpc_flags);
+
+	qla_nvme_delete(base_vha);
+
 	dma_free_coherent(&ha->pdev->dev,
 		base_vha->gnl.size, base_vha->gnl.l, base_vha->gnl.ldma);
 
-- 
2.12.0

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
                   ` (2 preceding siblings ...)
  2017-06-21 20:48 ` [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:33   ` Hannes Reinecke
  2017-06-22  9:51   ` Johannes Thumshirn
  2017-06-21 20:48 ` [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration Madhani, Himanshu
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Duane Grigsby <duane.grigsby@cavium.com>

This patch adds switch command support for the FC-4 type of FC-NVMe (0x28)
for registering the HBA port with the management server. The RFT_ID command
is used to register FC-4 type 0x28, and RFF_ID is used to register the
FC-4 feature bits for an FC-NVMe port.

Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Reviewed-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/qla2xxx/qla_def.h  |   1 +
 drivers/scsi/qla2xxx/qla_gbl.h  |   6 +-
 drivers/scsi/qla2xxx/qla_gs.c   | 118 +++++++++++++++++++++++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_init.c |  11 +++-
 4 files changed, 131 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 0dbcb84011b0..0730b10b4280 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -2867,6 +2867,7 @@ struct ct_sns_rsp {
 		} gpsc;
 
 #define GFF_FCP_SCSI_OFFSET	7
+#define GFF_NVME_OFFSET		23 /* type = 28h */
 		struct {
 			uint8_t fc4_features[128];
 		} gff_id;
diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
index c6af45f7d5d6..cadb6e3baacc 100644
--- a/drivers/scsi/qla2xxx/qla_gbl.h
+++ b/drivers/scsi/qla2xxx/qla_gbl.h
@@ -18,6 +18,7 @@ extern void qla_nvme_delete(scsi_qla_host_t *);
 extern void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
 extern void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *,
     struct req_que *);
+extern void qla24xx_async_gffid_sp_done(void *, int);
 
 /*
  * Global Function Prototypes in qla_init.c source file.
@@ -618,7 +619,7 @@ extern int qla2x00_gpn_id(scsi_qla_host_t *, sw_info_t *);
 extern int qla2x00_gnn_id(scsi_qla_host_t *, sw_info_t *);
 extern void qla2x00_gff_id(scsi_qla_host_t *, sw_info_t *);
 extern int qla2x00_rft_id(scsi_qla_host_t *);
-extern int qla2x00_rff_id(scsi_qla_host_t *);
+extern int qla2x00_rff_id(scsi_qla_host_t *, u8);
 extern int qla2x00_rnn_id(scsi_qla_host_t *);
 extern int qla2x00_rsnn_nn(scsi_qla_host_t *);
 extern void *qla2x00_prep_ms_fdmi_iocb(scsi_qla_host_t *, uint32_t, uint32_t);
@@ -644,7 +645,8 @@ void qla24xx_handle_gpnid_event(scsi_qla_host_t *, struct event_arg *);
 int qla24xx_post_gpsc_work(struct scsi_qla_host *, fc_port_t *);
 int qla24xx_async_gpsc(scsi_qla_host_t *, fc_port_t *);
 int qla2x00_mgmt_svr_login(scsi_qla_host_t *);
-
+void qla24xx_handle_gffid_event(scsi_qla_host_t *vha, struct event_arg *ea);
+int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport);
 /*
  * Global Function Prototypes in qla_attr.c source file.
  */
diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
index 540fec524ccb..c91478529b51 100644
--- a/drivers/scsi/qla2xxx/qla_gs.c
+++ b/drivers/scsi/qla2xxx/qla_gs.c
@@ -550,6 +550,8 @@ qla2x00_rft_id(scsi_qla_host_t *vha)
 
 	ct_req->req.rft_id.fc4_types[2] = 0x01;		/* FCP-3 */
 
+	if (vha->flags.nvme_enabled)
+		ct_req->req.rft_id.fc4_types[6] = 1;    /* NVMe type 28h */
 	/* Execute MS IOCB */
 	rval = qla2x00_issue_iocb(vha, ha->ms_iocb, ha->ms_iocb_dma,
 	    sizeof(ms_iocb_entry_t));
@@ -575,7 +577,7 @@ qla2x00_rft_id(scsi_qla_host_t *vha)
  * Returns 0 on success.
  */
 int
-qla2x00_rff_id(scsi_qla_host_t *vha)
+qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
 {
 	int		rval;
 	struct qla_hw_data *ha = vha->hw;
@@ -613,7 +615,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha)
 
 	qlt_rff_id(vha, ct_req);
 
-	ct_req->req.rff_id.fc4_type = 0x08;		/* SCSI - FCP */
+ct_req->req.rff_id.fc4_type = type;		/* SCSI-FCP or FC-NVMe */
 
 	/* Execute MS IOCB */
 	rval = qla2x00_issue_iocb(vha, ha->ms_iocb, ha->ms_iocb_dma,
@@ -2754,6 +2756,10 @@ qla2x00_gff_id(scsi_qla_host_t *vha, sw_info_t *list)
 				list[i].fc4_type = FC4_TYPE_FCP_SCSI;
 			else
 				list[i].fc4_type = FC4_TYPE_OTHER;
+
+			list[i].fc4f_nvme =
+			    ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
+			list[i].fc4f_nvme &= 0xf;
 		}
 
 		/* Last device exit. */
@@ -3305,3 +3311,111 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
 done:
 	return rval;
 }
+
+void qla24xx_handle_gffid_event(scsi_qla_host_t *vha, struct event_arg *ea)
+{
+	fc_port_t *fcport = ea->fcport;
+
+	qla24xx_post_gnl_work(vha, fcport);
+}
+
+void qla24xx_async_gffid_sp_done(void *s, int res)
+{
+	struct srb *sp = s;
+	struct scsi_qla_host *vha = sp->vha;
+	fc_port_t *fcport = sp->fcport;
+	struct ct_sns_rsp *ct_rsp;
+	struct event_arg ea;
+
+	ql_dbg(ql_dbg_disc, vha, 0x2133,
+	    "Async done-%s res %x ID %x. %8phC\n",
+	    sp->name, res, fcport->d_id.b24, fcport->port_name);
+
+	fcport->flags &= ~FCF_ASYNC_SENT;
+	ct_rsp = &fcport->ct_desc.ct_sns->p.rsp;
+	/*
+	 * FC-GS-7, 5.2.3.12 FC-4 Features - format
+	 * The format of the FC-4 Features object, as defined by the FC-4,
+	 * shall be an array of 4-bit values, one for each type code value.
+	 */
+	if (!res) {
+		if (ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET] & 0xf) {
+			/* w1 b00:03 */
+			fcport->fc4_type =
+			    ct_rsp->rsp.gff_id.fc4_features[GFF_FCP_SCSI_OFFSET];
+			fcport->fc4_type &= 0xf;
+		}
+
+		if (ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET] & 0xf) {
+			/* w5 [00:03]/28h */
+			fcport->fc4f_nvme =
+			    ct_rsp->rsp.gff_id.fc4_features[GFF_NVME_OFFSET];
+			fcport->fc4f_nvme &= 0xf;
+		}
+	}
+
+	memset(&ea, 0, sizeof(ea));
+	ea.sp = sp;
+	ea.fcport = sp->fcport;
+	ea.rc = res;
+	ea.event = FCME_GFFID_DONE;
+
+	qla2x00_fcport_event_handler(vha, &ea);
+	sp->free(sp);
+}
+
+/* Get FC4 Feature with Nport ID. */
+int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+{
+	int rval = QLA_FUNCTION_FAILED;
+	struct ct_sns_req       *ct_req;
+	srb_t *sp;
+
+	if (!vha->flags.online)
+		return rval;
+
+	sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+	if (!sp)
+		return rval;
+
+	fcport->flags |= FCF_ASYNC_SENT;
+	sp->type = SRB_CT_PTHRU_CMD;
+	sp->name = "gffid";
+	sp->gen1 = fcport->rscn_gen;
+	sp->gen2 = fcport->login_gen;
+
+	qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+
+	/* CT_IU preamble  */
+	ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFF_ID_CMD,
+	    GFF_ID_RSP_SIZE);
+
+	ct_req->req.gff_id.port_id[0] = fcport->d_id.b.domain;
+	ct_req->req.gff_id.port_id[1] = fcport->d_id.b.area;
+	ct_req->req.gff_id.port_id[2] = fcport->d_id.b.al_pa;
+
+	sp->u.iocb_cmd.u.ctarg.req = fcport->ct_desc.ct_sns;
+	sp->u.iocb_cmd.u.ctarg.req_dma = fcport->ct_desc.ct_sns_dma;
+	sp->u.iocb_cmd.u.ctarg.rsp = fcport->ct_desc.ct_sns;
+	sp->u.iocb_cmd.u.ctarg.rsp_dma = fcport->ct_desc.ct_sns_dma;
+	sp->u.iocb_cmd.u.ctarg.req_size = GFF_ID_REQ_SIZE;
+	sp->u.iocb_cmd.u.ctarg.rsp_size = GFF_ID_RSP_SIZE;
+	sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+	sp->done = qla24xx_async_gffid_sp_done;
+
+	rval = qla2x00_start_sp(sp);
+	if (rval != QLA_SUCCESS)
+		goto done_free_sp;
+
+	ql_dbg(ql_dbg_disc, vha, 0x2132,
+	    "Async-%s hdl=%x  %8phC.\n", sp->name,
+	    sp->handle, fcport->port_name);
+
+	return rval;
+done_free_sp:
+	sp->free(sp);
+	fcport->flags &= ~FCF_ASYNC_SENT;
+	return rval;
+}
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 7286a80f796c..227c18426c6d 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -1244,6 +1244,9 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
 	case FCME_GPNID_DONE:
 		qla24xx_handle_gpnid_event(vha, ea);
 		break;
+	case FCME_GFFID_DONE:
+		qla24xx_handle_gffid_event(vha, ea);
+		break;
 	case FCME_DELETE_DONE:
 		qla24xx_handle_delete_done_event(vha, ea);
 		break;
@@ -4626,7 +4629,7 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
 				    &vha->dpc_flags))
 					break;
 			}
-			if (qla2x00_rff_id(vha)) {
+			if (qla2x00_rff_id(vha, FC4_TYPE_FCP_SCSI)) {
 				/* EMPTY */
 				ql_dbg(ql_dbg_disc, vha, 0x209a,
 				    "Register FC-4 Features failed.\n");
@@ -4634,6 +4637,12 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
 				    &vha->dpc_flags))
 					break;
 			}
+			if (vha->flags.nvme_enabled) {
+				if (qla2x00_rff_id(vha, FC_TYPE_NVME)) {
+					ql_dbg(ql_dbg_disc, vha, 0x2049,
+					    "Register NVME FC Type Features failed.\n");
+				}
+			}
 			if (qla2x00_rnn_id(vha)) {
 				/* EMPTY */
 				ql_dbg(ql_dbg_disc, vha, 0x2104,
-- 
2.12.0


* [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
                   ` (3 preceding siblings ...)
  2017-06-21 20:48 ` [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:33   ` Hannes Reinecke
  2017-06-22  9:52   ` Johannes Thumshirn
  2017-06-21 20:48 ` [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k Madhani, Himanshu
  2017-06-28  1:49 ` [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Martin K. Petersen
  6 siblings, 2 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Duane Grigsby <duane.grigsby@cavium.com>

Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/qla2xxx/qla_gs.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
index c91478529b51..b323a7c71eda 100644
--- a/drivers/scsi/qla2xxx/qla_gs.c
+++ b/drivers/scsi/qla2xxx/qla_gs.c
@@ -2166,6 +2166,13 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha)
 	    eiter->a.fc4_types[2],
 	    eiter->a.fc4_types[1]);
 
+	if (vha->flags.nvme_enabled) {
+		eiter->a.fc4_types[6] = 1;	/* NVMe type 28h */
+		ql_dbg(ql_dbg_disc, vha, 0x211f,
+		    "NVME FC4 Type = %02x 0x0 0x0 0x0 0x0 0x0.\n",
+		    eiter->a.fc4_types[6]);
+	}
+
 	/* Supported speed. */
 	eiter = entries + size;
 	eiter->type = cpu_to_be16(FDMI_PORT_SUPPORT_SPEED);
@@ -2363,6 +2370,15 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha)
 	    "Port Active FC4 Type = %02x %02x.\n",
 	    eiter->a.port_fc4_type[2], eiter->a.port_fc4_type[1]);
 
+	if (vha->flags.nvme_enabled) {
+		eiter->a.port_fc4_type[4] = 0;
+		eiter->a.port_fc4_type[5] = 0;
+		eiter->a.port_fc4_type[6] = 1;	/* NVMe type 28h */
+		ql_dbg(ql_dbg_disc, vha, 0x2120,
+		    "NVME Port Active FC4 Type = %02x 0x0 0x0 0x0 0x0 0x0.\n",
+		    eiter->a.port_fc4_type[6]);
+	}
+
 	/* Port State */
 	eiter = entries + size;
 	eiter->type = cpu_to_be16(FDMI_PORT_STATE);
-- 
2.12.0


* [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
                   ` (4 preceding siblings ...)
  2017-06-21 20:48 ` [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration Madhani, Himanshu
@ 2017-06-21 20:48 ` Madhani, Himanshu
  2017-06-22  6:33   ` Hannes Reinecke
  2017-06-28  1:49 ` [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Martin K. Petersen
  6 siblings, 1 reply; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-21 20:48 UTC (permalink / raw)
  To: martin.petersen
  Cc: himanshu.madhani, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

From: Himanshu Madhani <himanshu.madhani@cavium.com>

Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: James Smart <james.smart@broadcom.com>
---
 drivers/scsi/qla2xxx/qla_version.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
index dcbb9bb05e99..005a378f7fab 100644
--- a/drivers/scsi/qla2xxx/qla_version.h
+++ b/drivers/scsi/qla2xxx/qla_version.h
@@ -7,9 +7,9 @@
 /*
  * Driver version
  */
-#define QLA2XXX_VERSION      "9.01.00.00-k"
+#define QLA2XXX_VERSION      "10.00.00.00-k"
 
-#define QLA_DRIVER_MAJOR_VER	9
-#define QLA_DRIVER_MINOR_VER	1
+#define QLA_DRIVER_MAJOR_VER	10
+#define QLA_DRIVER_MINOR_VER	0
 #define QLA_DRIVER_PATCH_VER	0
 #define QLA_DRIVER_BETA_VER	0
-- 
2.12.0


* Re: [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling
  2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
@ 2017-06-22  6:28   ` Hannes Reinecke
  2017-06-28 21:15   ` James Bottomley
  1 sibling, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:28 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> Added logic to change the login process into an optional PRLI
> step for FC-NVMe ports as a separate operation, such that we can
> change type to 0x28 (NVMe).
> 
> Currently, the driver performs the PLOGI/PRLI together as one
> operation, but if the discovered port is an NVMe port then we
> first issue the PLOGI and then we issue the PRLI. Also, the
> fabric discovery logic was changed to mark each discovered FC
> NVMe port, so that we can register them with the FC-NVMe transport
> later.
> 
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> ---
>  drivers/scsi/qla2xxx/qla_dbg.c    |   9 +-
>  drivers/scsi/qla2xxx/qla_def.h    |  30 ++++++-
>  drivers/scsi/qla2xxx/qla_fw.h     |  13 ++-
>  drivers/scsi/qla2xxx/qla_gbl.h    |   1 +
>  drivers/scsi/qla2xxx/qla_init.c   | 168 ++++++++++++++++++++++++++++++++++++--
>  drivers/scsi/qla2xxx/qla_iocb.c   |  21 +++++
>  drivers/scsi/qla2xxx/qla_mbx.c    |  33 +++++---
>  drivers/scsi/qla2xxx/qla_os.c     |   4 +
>  drivers/scsi/qla2xxx/qla_target.c |   4 +-
>  9 files changed, 256 insertions(+), 27 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling
  2017-06-21 20:48 ` [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling Madhani, Himanshu
@ 2017-06-22  6:28   ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:28 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> This patch adds logic to handle the completion of
> FC-NVMe commands and creates a sub-command in the SRB
> command structure to manage NVMe commands.
> 
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> ---
>  drivers/scsi/qla2xxx/qla_def.h | 17 +++++++++
>  drivers/scsi/qla2xxx/qla_fw.h  | 22 ++++++++++--
>  drivers/scsi/qla2xxx/qla_isr.c | 79 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/qla2xxx/qla_os.c  | 18 ++++++++--
>  4 files changed, 131 insertions(+), 5 deletions(-)
> Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
  2017-06-21 20:48 ` [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration Madhani, Himanshu
@ 2017-06-22  6:32   ` Hannes Reinecke
  2017-06-22  9:46   ` Johannes Thumshirn
  1 sibling, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:32 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> This code provides the interfaces to register remote and local ports
> of FC4 type 0x28 with the FC-NVMe transport and transports the
> requests (FC-NVMe FC link services and FC-NVMe commands IUs) to the
> fabric. It also provides the support for allocating h/w queues and
> aborting FC-NVMe FC requests.
> 
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> ---
>  drivers/scsi/qla2xxx/Makefile   |   2 +-
>  drivers/scsi/qla2xxx/qla_dbg.c  |   2 +-
>  drivers/scsi/qla2xxx/qla_def.h  |   6 +
>  drivers/scsi/qla2xxx/qla_gbl.h  |  11 +
>  drivers/scsi/qla2xxx/qla_init.c |   8 +
>  drivers/scsi/qla2xxx/qla_iocb.c |  36 ++
>  drivers/scsi/qla2xxx/qla_isr.c  |  19 +
>  drivers/scsi/qla2xxx/qla_mbx.c  |  21 ++
>  drivers/scsi/qla2xxx/qla_nvme.c | 756 ++++++++++++++++++++++++++++++++++++++++
>  drivers/scsi/qla2xxx/qla_nvme.h | 132 +++++++
>  drivers/scsi/qla2xxx/qla_os.c   |  40 ++-
>  11 files changed, 1024 insertions(+), 9 deletions(-)
>  create mode 100644 drivers/scsi/qla2xxx/qla_nvme.c
>  create mode 100644 drivers/scsi/qla2xxx/qla_nvme.h
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server
  2017-06-21 20:48 ` [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server Madhani, Himanshu
@ 2017-06-22  6:33   ` Hannes Reinecke
  2017-06-22  9:51   ` Johannes Thumshirn
  1 sibling, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:33 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> This patch adds switch command support for FC-4 type of FC-NVMe (0x28)
> for resgistering HBA port to the management server. RFT_ID command is
> used to register FC-4 type of 0x28 and RFF_ID is used to register
> FC-4 features bits for FC-NVMe port.
> 
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> Reviewed-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/qla2xxx/qla_def.h  |   1 +
>  drivers/scsi/qla2xxx/qla_gbl.h  |   6 +-
>  drivers/scsi/qla2xxx/qla_gs.c   | 118 +++++++++++++++++++++++++++++++++++++++-
>  drivers/scsi/qla2xxx/qla_init.c |  11 +++-
>  4 files changed, 131 insertions(+), 5 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration
  2017-06-21 20:48 ` [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration Madhani, Himanshu
@ 2017-06-22  6:33   ` Hannes Reinecke
  2017-06-22  9:52   ` Johannes Thumshirn
  1 sibling, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:33 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> Signed-off-by: Giridhar Malavali <giridhar.malavali@cavium.com>
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/qla2xxx/qla_gs.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)

Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)


* Re: [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k
  2017-06-21 20:48 ` [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k Madhani, Himanshu
@ 2017-06-22  6:33   ` Hannes Reinecke
  0 siblings, 0 replies; 22+ messages in thread
From: Hannes Reinecke @ 2017-06-22  6:33 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: darren.trapp, linux-nvme, linux-scsi, giridhar.malavali

On 06/21/2017 10:48 PM, Madhani, Himanshu wrote:
> From: Himanshu Madhani <himanshu.madhani@cavium.com>
> 
> Signed-off-by: Himanshu Madhani <himanshu.madhani@cavium.com>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: James Smart <james.smart@broadcom.com>
> ---
>  drivers/scsi/qla2xxx/qla_version.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
> index dcbb9bb05e99..005a378f7fab 100644
> --- a/drivers/scsi/qla2xxx/qla_version.h
> +++ b/drivers/scsi/qla2xxx/qla_version.h
> @@ -7,9 +7,9 @@
>  /*
>   * Driver version
>   */
> -#define QLA2XXX_VERSION      "9.01.00.00-k"
> +#define QLA2XXX_VERSION      "10.00.00.00-k"
>  
> -#define QLA_DRIVER_MAJOR_VER	9
> -#define QLA_DRIVER_MINOR_VER	1
> +#define QLA_DRIVER_MAJOR_VER	10
> +#define QLA_DRIVER_MINOR_VER	0
>  #define QLA_DRIVER_PATCH_VER	0
>  #define QLA_DRIVER_BETA_VER	0
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
  2017-06-21 20:48 ` [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration Madhani, Himanshu
  2017-06-22  6:32   ` Hannes Reinecke
@ 2017-06-22  9:46   ` Johannes Thumshirn
       [not found]     ` <2d07d1fd-545b-0308-8a2b-5cfb59cbcf2b@broadcom.com>
  2017-06-23  3:16     ` Madhani, Himanshu
  1 sibling, 2 replies; 22+ messages in thread
From: Johannes Thumshirn @ 2017-06-22  9:46 UTC (permalink / raw)
  To: Madhani, Himanshu
  Cc: martin.petersen, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

On Wed, Jun 21, 2017 at 01:48:43PM -0700, Madhani, Himanshu wrote:
[...]
> +	wait_queue_head_t nvme_ls_waitQ;

Can you please lower-case the 'Q' in waitQ IFF you have to re-send the series?

[...]
> +	wait_queue_head_t nvme_waitQ;

Ditto

[...]
> +	wait_queue_head_t nvme_waitQ;

And here as well.

> diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
> index 6fbee11c1a18..c6af45f7d5d6 100644
> --- a/drivers/scsi/qla2xxx/qla_gbl.h
> +++ b/drivers/scsi/qla2xxx/qla_gbl.h
> @@ -10,6 +10,16 @@
>  #include <linux/interrupt.h>
>  
>  /*
> + * Global functions prototype in qla_nvme.c source file.
> + */
> +extern void qla_nvme_register_hba(scsi_qla_host_t *);
> +extern int  qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *);
> +extern void qla_nvme_delete(scsi_qla_host_t *);
> +extern void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
> +extern void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *,
> +    struct req_que *);

You're still not convinced of the idea of headers, heh ;-)
Especially as you have a qla_nvme.h.

[...]

> +	INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port);
> +	rport = kzalloc(sizeof(*rport), GFP_KERNEL);
> +	if (!rport) {
> +		ql_log(ql_log_warn, vha, 0x2101,
> +		    "%s: unable to alloc memory\n", __func__);

kzalloc() will warn you about a failed allocation, no need to double it.
See also:
http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf
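The fix is simply to drop the message, e.g. (a sketch against the quoted hunk, not
compile-tested; the -ENOMEM is an assumption, use whatever the surrounding function
returns on allocation failure):

```
	rport = kzalloc(sizeof(*rport), GFP_KERNEL);
	if (!rport)
		return -ENOMEM;
```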

[...]

> +	ret = nvme_fc_register_remoteport(vha->nvme_local_port, &rport->req,
> +	    &fcport->nvme_remote_port);
> +	if (ret) {
> +		ql_log(ql_log_warn, vha, 0x212e,
> +		    "Failed to register remote port. Transport returned %d\n",
> +		    ret);
> +		return ret;
> +	}
> +
> +	fcport->nvme_remote_port->private = fcport;

I think I already said that in the last review, but can you please move the 
fcport->nvme_remote_port->private = fcport;
assignment _above_ the nvme_fc_register_remoteport() call.

[...]

> +	vha = (struct scsi_qla_host *)lport->private;

No need to cast from void *
> +	ql_log(ql_log_info, vha, 0x2104,
> +	    "%s: handle %p, idx =%d, qsize %d\n",
> +	    __func__, handle, qidx, qsize);

Btw, sometime in the future you could change your ql_log() thingies to the
kernel's dyndebug facility.

[...]

> +	rval = ha->isp_ops->abort_command(sp);
> +	if (rval != QLA_SUCCESS)
> +		ql_log(ql_log_warn, fcport->vha, 0x2125,
> +		    "%s: failed to abort LS command for SP:%p rval=%x\n",
> +		    __func__, sp, rval);
> +
> +	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
> +	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);

If you insist on having these two messages ("failed to abort" and "aborted")
can you at least fold them into one print statement.
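One possible fold into a single statement (a sketch, not compile-tested; reusing
one of the existing message ids is just an assumption):

```
	rval = ha->isp_ops->abort_command(sp);
	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
	    "%s: %s LS command sp:%p on fcport:%p rval=%x\n", __func__,
	    rval != QLA_SUCCESS ? "failed to abort" : "aborted",
	    sp, fcport, rval);
```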

> +	rval = ha->isp_ops->abort_command(sp);
> +	if (!rval)
> +		ql_log(ql_log_warn, fcport->vha, 0x2127,
> +		    "%s: failed to abort command for SP:%p rval=%x\n",
> +		    __func__, sp, rval);
> +
> +	ql_dbg(ql_dbg_io, fcport->vha, 0x2126,
> +	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);

Ditto.

[...]


> +	/* Setup qpair pointers */
> +	req = qpair->req;
> +	tot_dsds = fd->sg_cnt;
> +
> +	/* Acquire qpair specific lock */
> +	spin_lock_irqsave(&qpair->qp_lock, flags);
> +
> +	/* Check for room in outstanding command list. */
> +	handle = req->current_outstanding_cmd;

I've just seen this in qla2xxx_start_scsi_mq() and
qla2xxx_dif_start_scsi_mq() and was about to send you an RFC patch. But
here it is for completeness in the nvme version as well:

You save a pointer to the req_que from you qpair and _afterwards_ you grab
the qp_lock. What prevents someone from changing the request internals
underneath you?

Like this:

CPU0                               CPU1
req = qpair->req;
                                 qla2xxx_delete_qpair(vha, qpair);
                                 `-> ret = qla25xx_delete_req_que(vha, qpair->req);
spin_lock_irqsave(&qpair->qp_lock, flags);
handle = req->current_outstanding_cmd;

Oh and btw, neither qla2xxx_delete_qpair() nor qla25xx_delete_req_que() grab
the qp_lock.

I think this is something worth re-thinking. Maybe you can identify the blocks
accessing struct members which need to be touched under a lock and extract
them into a helper function which calls lockdep_assert_held(). Not a must, just
an idea.
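The helper idea could look roughly like this (a kernel-context sketch, not
compile-tested; the function name is illustrative, not from the driver):

```
/* Every access to req members that must happen under qp_lock goes
 * through a helper that documents and asserts the locking rule. */
static uint32_t qla_qpair_next_handle(struct qla_qpair *qpair)
{
	lockdep_assert_held(&qpair->qp_lock);
	return qpair->req->current_outstanding_cmd;
}
```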

[...]
> +
> +	/* Load data segments */
> +	for_each_sg(sgl, sg, tot_dsds, i) {

Do you really need the whole loop under a spin_lock_irqsave()? If the sglist
has a lot of entries (i.e. because we couldn't cluster it) we risk
triggering an NMI watchdog soft-lockup WARN_ON(). You need to grab the lock when
accessing req's members, but the rest of the loop? This applies to
qla24xx_build_scsi_iocbs() for SCSI as well.

[...]

> +	struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle;

Void pointer cast. Someone really should write a coccinelle script to get rid
of 'em.

[...]

> +	/* Alloc SRB structure */
> +	sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);
> +	if (!sp)
> +		return -EIO;

__blk_mq_run_hw_queue()
`-> blk_mq_sched_dispatch_requests()
    `-> blk_mq_dispatch_rq_list()
        `-> nvme_fc_queue_rq()
            `-> nvme_fc_start_fcp_op() 
                `-> qla_nvme_post_cmd()
isn't called from an IRQ context and qla2xxx_get_qpair_sp() internally
uses mempool_alloc(). From mempool_alloc()'s documentation:

"Note that due to preallocation, this function *never* fails when called from
process contexts. (it might fail if called from an IRQ context.)"
mm/mempool.c:306

[...]

> +	fcport = (fc_port_t *)rport->private;
Void cast.

[...]
> +	rval = ha->isp_ops->abort_command(sp);
> +	if (!rval) {
> +		if (!qla_nvme_wait_on_command(sp))

        if (!rval && !qla_nvme_wait_on_command(sp))

[...]

> +		for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
> +			sp = req->outstanding_cmds[cnt];
> +			if ((sp) && ((sp->type == SRB_NVME_CMD) ||
                            ^ parenthesis
> +			    (sp->type == SRB_NVME_LS)) &&
> +				(sp->fcport == fcport)) {
                                ^ parenthesis
                                 
[...]

> diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h
[...]

void qla_nvme_register_hba(scsi_qla_host_t *);
int  qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *);
void qla_nvme_delete(scsi_qla_host_t *);
void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *, struct req_que *);

[...]

> +#if (IS_ENABLED(CONFIG_NVME_FC))
> +int ql2xnvmeenable = 1;
> +#else
> +int ql2xnvmeenable;
> +#endif
> +module_param(ql2xnvmeenable, int, 0644);
> +MODULE_PARM_DESC(ql2xnvmeenable,
> +    "Enables NVME support. "
> +    "0 - no NVMe.  Default is Y");

Default is Y IFF CONFIG_NVME_FC is enabled. Is it possible to guard the whole module
parameter with IS_ENABLED(CONFIG_NVME_FC)? Not sure if this would break if
CONFIG_NVME_FC=n and someone does qla2xxx.ql2xnvmeenable=N.
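A possible guard, so the parameter only exists when the transport is built in
(a sketch, not compile-tested; whether qla2xxx.ql2xnvmeenable=N then errors out
or is merely warned about depends on how the option is passed):

```
#if IS_ENABLED(CONFIG_NVME_FC)
int ql2xnvmeenable = 1;
module_param(ql2xnvmeenable, int, 0644);
MODULE_PARM_DESC(ql2xnvmeenable,
    "Enables NVMe support. 0 - no NVMe. Default is 1");
#else
int ql2xnvmeenable;
#endif
```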

[...]

> -	if (sp->type != SRB_NVME_CMD) {
> +	if ((sp->type != SRB_NVME_CMD) && (sp->type != SRB_NVME_LS)) {

http://en.cppreference.com/w/c/language/operator_precedence

> +					if ((sp->type == SRB_NVME_CMD) ||
> +					    (sp->type == SRB_NVME_LS)) {

^^

Thanks,
	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server
  2017-06-21 20:48 ` [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server Madhani, Himanshu
  2017-06-22  6:33   ` Hannes Reinecke
@ 2017-06-22  9:51   ` Johannes Thumshirn
  1 sibling, 0 replies; 22+ messages in thread
From: Johannes Thumshirn @ 2017-06-22  9:51 UTC (permalink / raw)
  To: Madhani, Himanshu
  Cc: martin.petersen, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme

On Wed, Jun 21, 2017 at 01:48:44PM -0700, Madhani, Himanshu wrote:

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 5/6] qla2xxx: Use FC-NMVe FC4 type for FDMI registration
  2017-06-21 20:48 ` [PATCH v2 5/6] qla2xxx: Use FC-NMVe FC4 type for FDMI registration Madhani, Himanshu
  2017-06-22  6:33   ` Hannes Reinecke
@ 2017-06-22  9:52   ` Johannes Thumshirn
  1 sibling, 0 replies; 22+ messages in thread
From: Johannes Thumshirn @ 2017-06-22  9:52 UTC (permalink / raw)
  To: Madhani, Himanshu
  Cc: martin.petersen, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme


Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
       [not found]     ` <2d07d1fd-545b-0308-8a2b-5cfb59cbcf2b@broadcom.com>
@ 2017-06-22 18:53       ` Johannes Thumshirn
  0 siblings, 0 replies; 22+ messages in thread
From: Johannes Thumshirn @ 2017-06-22 18:53 UTC (permalink / raw)
  To: James Smart
  Cc: Madhani, Himanshu, martin.petersen, linux-scsi, darren.trapp,
	giridhar.malavali, linux-nvme

On Thu, Jun 22, 2017 at 10:48:46AM -0700, James Smart wrote:
> He can't move it. the fcport->nvme_remote_port pointer is set by the
> nvme_fc_register_remoteport() routine (if return status is 0).

Gah, that's kind of weird. Literally _all_ of the kernel's register_xxx()
functions have a semantic that after the registration is done the object can be
used, and thus assigning the private pointer afterwards is an error. Damn, I didn't
realize this in the nvme-fc review.
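For reference, the transport hands the remote port back through its third
argument, which is why the assignment cannot precede the call (a sketch of the
flow in question, not compile-tested):

```
	/* nvme_fc_register_remoteport() allocates the remote port and
	 * stores it in fcport->nvme_remote_port on success, so ->private
	 * can only be set afterwards. */
	ret = nvme_fc_register_remoteport(vha->nvme_local_port, &rport->req,
	    &fcport->nvme_remote_port);
	if (ret)
		return ret;
	fcport->nvme_remote_port->private = fcport;
```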

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
  2017-06-22  9:46   ` Johannes Thumshirn
       [not found]     ` <2d07d1fd-545b-0308-8a2b-5cfb59cbcf2b@broadcom.com>
@ 2017-06-23  3:16     ` Madhani, Himanshu
  2017-06-23  6:28       ` Johannes Thumshirn
  1 sibling, 1 reply; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-23  3:16 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Martin K. Petersen, linux-scsi, Trapp, Darren, Malavali, Giridhar,
	linux-nvme@lists.infradead.org

Hi Johannes, 

> On Jun 22, 2017, at 2:46 AM, Johannes Thumshirn <jthumshirn@suse.de> wrote:
> 
> On Wed, Jun 21, 2017 at 01:48:43PM -0700, Madhani, Himanshu wrote:
> [...]
>> +	wait_queue_head_t nvme_ls_waitQ;
> 
> Can you please lower-case the 'Q' in waitQ IFF you have to re-send the series?

sure.

> 
> [...]
>> +	wait_queue_head_t nvme_waitQ;
> 
> Ditto
> 
Ack

> [...]
>> +	wait_queue_head_t nvme_waitQ;
> 
> And here as well.

Ack

> 
>> diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
>> index 6fbee11c1a18..c6af45f7d5d6 100644
>> --- a/drivers/scsi/qla2xxx/qla_gbl.h
>> +++ b/drivers/scsi/qla2xxx/qla_gbl.h
>> @@ -10,6 +10,16 @@
>> #include <linux/interrupt.h>
>> 
>> /*
>> + * Global functions prototype in qla_nvme.c source file.
>> + */
>> +extern void qla_nvme_register_hba(scsi_qla_host_t *);
>> +extern int  qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *);
>> +extern void qla_nvme_delete(scsi_qla_host_t *);
>> +extern void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
>> +extern void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *,
>> +    struct req_que *);
> 
> You're still not convinced of the idea of headers, heh ;-)
> Especially as you have a qla_nvme.h.
> 
> […]
> 

If this is not a *must*, I'd like to post a patch for this with the other patches that I am going to queue up during the rc1 phase.

>> +	INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port);
>> +	rport = kzalloc(sizeof(*rport), GFP_KERNEL);
>> +	if (!rport) {
>> +		ql_log(ql_log_warn, vha, 0x2101,
>> +		    "%s: unable to alloc memory\n", __func__);
> 
> kzalloc() will warn you about a failed allocation, no need to double it.
> See also:
> http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf
> 
> […]

Ack. 

>> +	ret = nvme_fc_register_remoteport(vha->nvme_local_port, &rport->req,
>> +	    &fcport->nvme_remote_port);
>> +	if (ret) {
>> +		ql_log(ql_log_warn, vha, 0x212e,
>> +		    "Failed to register remote port. Transport returned %d\n",
>> +		    ret);
>> +		return ret;
>> +	}
>> +
>> +	fcport->nvme_remote_port->private = fcport;
> 
> I think I already said that in the last review, but can you please move the 
> fcport->nvme_remote_port->private = fcport;
> assignment _above_ the nvme_fc_register_remoteport() call.
> 

I saw your response to James that this is okay for FC NVMe code.

> [...]
> 
>> +	vha = (struct scsi_qla_host *)lport->private;
> 
> No need to cast from void *
>> +	ql_log(ql_log_info, vha, 0x2104,
>> +	    "%s: handle %p, idx =%d, qsize %d\n",
>> +	    __func__, handle, qidx, qsize);
> 
> Btw, sometime in the future you could change your ql_log() thingies to the
> kernel's dyndebug facility.
> 
> […]

Thanks for the suggestions. I'll bring it up with the team and we can slowly convert these to the kernel's
dynamic debugging facility.


>> +	rval = ha->isp_ops->abort_command(sp);
>> +	if (rval != QLA_SUCCESS)
>> +		ql_log(ql_log_warn, fcport->vha, 0x2125,
>> +		    "%s: failed to abort LS command for SP:%p rval=%x\n",
>> +		    __func__, sp, rval);
>> +
>> +	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
>> +	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);
> 
> If you insist on having these two messages ("failed to abort" and "aborted")
> can you at least fold them into one print statement.
> 

I'll send a follow-up patch for this cleanup, if that's okay with you?

>> +	rval = ha->isp_ops->abort_command(sp);
>> +	if (!rval)
>> +		ql_log(ql_log_warn, fcport->vha, 0x2127,
>> +		    "%s: failed to abort command for SP:%p rval=%x\n",
>> +		    __func__, sp, rval);
>> +
>> +	ql_dbg(ql_dbg_io, fcport->vha, 0x2126,
>> +	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);
> 
> Ditto.
> 

Agree. Will fold this into the cleanup patch.

> [...]
> 
> 
>> +	/* Setup qpair pointers */
>> +	req = qpair->req;
>> +	tot_dsds = fd->sg_cnt;
>> +
>> +	/* Acquire qpair specific lock */
>> +	spin_lock_irqsave(&qpair->qp_lock, flags);
>> +
>> +	/* Check for room in outstanding command list. */
>> +	handle = req->current_outstanding_cmd;
> 
> I've just seen this in qla2xxx_start_scsi_mq() and
> qla2xxx_dif_start_scsi_mq() and was about to send you an RFC patch. But
> here it is for completeness in the nvme version as well:
> 
> You save a pointer to the req_que from you qpair and _afterwards_ you grab
> the qp_lock. What prevents someone from changing the request internals
> underneath you?
> 
> Like this:
> 
> CPU0                               CPU1
> req = qpair->req;
>                                 qla2xxx_delete_qpair(vha, qpair);
>                                 `-> ret = qla25xx_delete_req_que(vha, qpair->req);
> spin_lock_irqsave(&qpair->qp_lock, flags);
> handle = req->current_outstanding_cmd;
> 
> Oh and btw, neither qla2xxx_delete_qpair() nor qla25xx_delete_req_que() grab
> the qp_lock.
> 
> I think this is something worth re-thinking. Maybe you can identify the blocks
> accessing struct members which need to be touched under a lock and extract
> them into a helper function which calls lockdep_assert_held(). Not a must, just
> an idea.
> 

This is a very valid point you brought up, thanks for the detailed review comment.
Starting from your patch submitted this morning, I'd like to have our test team run through
regression testing with these changes; we can incorporate that into NVMe as well
and send a follow-up patch to correct this. Would you be okay with that?

> [...]
>> +
>> +	/* Load data segments */
>> +	for_each_sg(sgl, sg, tot_dsds, i) {
> 
> Do you really need the whole loop under a spin_lock_irqsave()? If the sglist
> has a lot of entries (i.e. because we couldn't cluster it) we risk
> triggering an NMI watchdog soft-lockup WARN_ON(). You need to grab the lock when
> accessing req's members, but the rest of the loop? This applies to
> qla24xx_build_scsi_iocbs() for SCSI as well.
> 

Since these changes would need us to do regression testing, I would like to send a
follow-up patch to correct them separately.

> [...]
> 
>> +	struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle;
> 
> Void pointer cast. Someone really should write a coccinelle script to get rid
> of em.
> 

Will send a follow-up patch for the cleanup.

> [...]
> 
>> +	/* Alloc SRB structure */
>> +	sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);
>> +	if (!sp)
>> +		return -EIO;
> 
> __blk_mq_run_hw_queue()
> `-> blk_mq_sched_dispatch_requests()
>    `-> blk_mq_dispatch_rq_list()
>        `-> nvme_fc_queue_rq()
>            `-> nvme_fc_start_fcp_op() 
>                `-> qla_nvme_post_cmd()
> isn't called from an IRQ context and qla2xxx_get_qpair_sp() internally
> uses mempool_alloc(). From mempool_alloc()'s documentation:
> 
> "Note that due to preallocation, this function *never* fails when called from
> process contexts. (it might fail if called from an IRQ context.)"
> mm/mempool.c:306
> 


Will investigate and work on fixing this. 

> [...]
> 
>> +	fcport = (fc_port_t *)rport->private;
> Void cast.
> 
> [...]
>> +	rval = ha->isp_ops->abort_command(sp);
>> +	if (!rval) {
>> +		if (!qla_nvme_wait_on_command(sp))
> 
>        if (!rval && !qla_nvme_wait_on_command(sp))
> 
Ack. Will fold it into the cleanup patch, if you are okay with that?

> [...]
> 
>> +		for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
>> +			sp = req->outstanding_cmds[cnt];
>> +			if ((sp) && ((sp->type == SRB_NVME_CMD) ||
>                            ^ parenthesis
>> +			    (sp->type == SRB_NVME_LS)) &&
>> +				(sp->fcport == fcport)) {
>                                ^ parenthesis
> 
> [...]
> 
>> diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h
> [...]
> 
> void qla_nvme_register_hba(scsi_qla_host_t *);
> int  qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *);
> void qla_nvme_delete(scsi_qla_host_t *);
> void qla_nvme_abort(struct qla_hw_data *, srb_t *sp);
> void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *, struct req_que *);
> 

I'll have this as a follow-up patch.

> [...]
> 
>> +#if (IS_ENABLED(CONFIG_NVME_FC))
>> +int ql2xnvmeenable = 1;
>> +#else
>> +int ql2xnvmeenable;
>> +#endif
>> +module_param(ql2xnvmeenable, int, 0644);
>> +MODULE_PARM_DESC(ql2xnvmeenable,
>> +    "Enables NVME support. "
>> +    "0 - no NVMe.  Default is Y");
> 
> Default is Y IFF CONFIG_NVME_FC is enabled. Is it possible to guard the whole module
> parameter with IS_ENABLED(CONFIG_NVME_FC)? Not sure if this would break if
> CONFIG_NVME_FC=n and someone does qla2xxx.ql2xnvmeenable=N.
> 
> [...]
> 
>> -	if (sp->type != SRB_NVME_CMD) {
>> +	if ((sp->type != SRB_NVME_CMD) && (sp->type != SRB_NVME_LS)) {
> 
> http://en.cppreference.com/w/c/language/operator_precedence
> 

If you agree, I'd like to take these comments into the cleanup series that I'll
submit as a follow-up.

>> +					if ((sp->type == SRB_NVME_CMD) ||
>> +					    (sp->type == SRB_NVME_LS)) {
> 
> ^^
> 
> Thanks,
> 	Johannes
> 
> -- 
> Johannes Thumshirn                                          Storage
> jthumshirn@suse.de                                +49 911 74053 689
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg)
> Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

Thanks for the detailed review of this series and the valuable input.

I'll send the follow-up series shortly. Let me know if this series is okay as-is,
with follow-up patches to address your concerns.

Thanks,
-Himanshu 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
  2017-06-23  3:16     ` Madhani, Himanshu
@ 2017-06-23  6:28       ` Johannes Thumshirn
  0 siblings, 0 replies; 22+ messages in thread
From: Johannes Thumshirn @ 2017-06-23  6:28 UTC (permalink / raw)
  To: Madhani, Himanshu
  Cc: Martin K. Petersen, linux-scsi, Trapp, Darren, Malavali, Giridhar,
	linux-nvme@lists.infradead.org

On Fri, Jun 23, 2017 at 03:16:09AM +0000, Madhani, Himanshu wrote:
> If this is not a *must*, I'd like to post a patch for this with the other patches that I am going to queue up during the rc1 phase.

Ok.

[...]

> I saw your response to James that this is okay for FC NVMe code.
> 
> > [...]
> > 
> >> +	vha = (struct scsi_qla_host *)lport->private;
> > 
> > No need to cast from void *
> >> +	ql_log(ql_log_info, vha, 0x2104,
> >> +	    "%s: handle %p, idx =%d, qsize %d\n",
> >> +	    __func__, handle, qidx, qsize);
> > 
> > Btw, sometime in the future you could change your ql_log() thingies to the
> > kernel's dyndebug facility.
> > 
> > […]
> 
> Thanks for the suggestions. I'll bring it up with the team and we can slowly convert these to the kernel's
> dynamic debugging facility.

Thanks a lot.
> 
> 
> >> +	rval = ha->isp_ops->abort_command(sp);
> >> +	if (rval != QLA_SUCCESS)
> >> +		ql_log(ql_log_warn, fcport->vha, 0x2125,
> >> +		    "%s: failed to abort LS command for SP:%p rval=%x\n",
> >> +		    __func__, sp, rval);
> >> +
> >> +	ql_dbg(ql_dbg_io, fcport->vha, 0x212b,
> >> +	    "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport);
> > 
> > If you insist on having these two messages ("failed to abort" and "aborted")
> > can you at least fold them into one print statement.
> > 
> 
> I'll send a follow-up patch for this cleanup, if that's okay with you?

OK

[...]
> > I've just seen this in qla2xxx_start_scsi_mq() and
> > qla2xxx_dif_start_scsi_mq() and was about to send you an RFC patch. But
> > here it is for completeness in the nvme version as well:
> > 
> > You save a pointer to the req_que from you qpair and _afterwards_ you grab
> > the qp_lock. What prevents someone from changing the request internals
> > underneath you?
> > 
> > Like this:
> > 
> > CPU0                               CPU1
> > req = qpair->req;
> >                                 qla2xxx_delete_qpair(vha, qpair);
> >                                 `-> ret = qla25xx_delete_req_que(vha, qpair->req);
> > spin_lock_irqsave(&qpair->qp_lock, flags);
> > handle = req->current_outstanding_cmd;
> > 
> > Oh and btw, neither qla2xxx_delete_qpair() nor qla25xx_delete_req_que() grab
> > the qp_lock.
> > 
> > I think this is something worth re-thinking. Maybe you can identify the blocks
> > accessing struct members which need to be touched under a lock and extract
> > them into a helper function which calls lockdep_assert_held(). Not a must, just
> > an idea.
> > 
> 
> This is a very valid point you brought up, thanks for the detailed review comment.
> Starting from your patch submitted this morning, I'd like to have our test team run through
> regression testing with these changes; we can incorporate that into NVMe as well
> and send a follow-up patch to correct this. Would you be okay with that?

That patch has a bug and I'll need to respin it, but I'll be sending you a v2
today.

> 
> > [...]
> >> +
> >> +	/* Load data segments */
> >> +	for_each_sg(sgl, sg, tot_dsds, i) {
> > 
> > Do you really need the whole loop under a spin_lock_irqsave()? If the sglist
> > has a lot of entries (i.e. because we couldn't cluster it) we risk
> > triggering an NMI watchdog soft-lockup WARN_ON(). You need to grab the lock when
> > accessing req's members, but the rest of the loop? This applies to
> > qla24xx_build_scsi_iocbs() for SCSI as well.
> > 
> 
> Since these changes would need us to do regression testing, I would like to send a
> follow-up patch to correct them separately.

Sure.

> 
> > [...]
> > 
> >> +	struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle;
> > 
> > Void pointer cast. Someone really should write a coccinelle script to get rid
> > of em.
> > 
> 
> Will send a follow-up patch for the cleanup.
> 
> > [...]
> > 
> >> +	/* Alloc SRB structure */
> >> +	sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC);
> >> +	if (!sp)
> >> +		return -EIO;
> > 
> > __blk_mq_run_hw_queue()
> > `-> blk_mq_sched_dispatch_requests()
> >    `-> blk_mq_dispatch_rq_list()
> >        `-> nvme_fc_queue_rq()
> >            `-> nvme_fc_start_fcp_op() 
> >                `-> qla_nvme_post_cmd()
> > isn't called from an IRQ context and qla2xxx_get_qpair_sp() internally
> > uses mempool_alloc(). From mempool_alloc()'s documentation:
> > 
> > "Note that due to preallocation, this function *never* fails when called from
> > process contexts. (it might fail if called from an IRQ context.)"
> > mm/mempool.c:306
> > 
> 
> 
> Will investigate and work on fixing this. 


I think I made a mistake here; qla2xxx_get_qpair_sp() can fail for reasons
other than OOM. My bad, sorry.

> Thanks for these details review of this series and valuable input. 
> 
> I’ll send follow up series shortly. Let me know if this series is okay as is and
> a follow up patches to address concerns by you are okay.

Thanks a lot,
	Johannes
-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver
  2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
                   ` (5 preceding siblings ...)
  2017-06-21 20:48 ` [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k Madhani, Himanshu
@ 2017-06-28  1:49 ` Martin K. Petersen
  6 siblings, 0 replies; 22+ messages in thread
From: Martin K. Petersen @ 2017-06-28  1:49 UTC (permalink / raw)
  To: Madhani, Himanshu
  Cc: martin.petersen, linux-scsi, darren.trapp, giridhar.malavali,
	linux-nvme


Himanshu,

> This patch series adds NVMe FC fabric support for qla2xxx initiator
> mode driver.

Applied to 4.13/scsi-queue, thanks!

-- 
Martin K. Petersen	Oracle Linux Engineering

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling
  2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
  2017-06-22  6:28   ` Hannes Reinecke
@ 2017-06-28 21:15   ` James Bottomley
  2017-06-28 21:23     ` Madhani, Himanshu
  1 sibling, 1 reply; 22+ messages in thread
From: James Bottomley @ 2017-06-28 21:15 UTC (permalink / raw)
  To: Madhani, Himanshu, martin.petersen
  Cc: linux-scsi, darren.trapp, giridhar.malavali, linux-nvme

On Wed, 2017-06-21 at 13:48 -0700, Madhani, Himanshu wrote:
> From: Duane Grigsby <duane.grigsby@cavium.com>
> 
> Added logic to change the login process into an optional PRIL
> step for FC-NVMe ports as a separate operation, such that we can
> change type to 0x28 (NVMe).
> 
> Currently, the driver performs the PLOGI/PRLI together as one
> operation, but if the discovered port is an NVMe port then we
> first issue the PLOGI and then we issue the PRLI. Also, the
> fabric discovery logic was changed to mark each discovered FC
> NVMe port, so that we can register them with the FC-NVMe transport
> later.
> 
> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>

I just got a whole load of bounces from this: you've misspelled Anil's
email address (h and t transposed).  It looks like a generic cut and
paste, so could you fix it for next time?

Thanks,

James

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling
  2017-06-28 21:15   ` James Bottomley
@ 2017-06-28 21:23     ` Madhani, Himanshu
  0 siblings, 0 replies; 22+ messages in thread
From: Madhani, Himanshu @ 2017-06-28 21:23 UTC (permalink / raw)
  To: James Bottomley
  Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
	Trapp, Darren, Malavali, Giridhar, linux-nvme@lists.infradead.org

Hi James,

> On Jun 28, 2017, at 2:15 PM, James Bottomley <James.Bottomley@HansenPartnership.com> wrote:
> 
> On Wed, 2017-06-21 at 13:48 -0700, Madhani, Himanshu wrote:
>> From: Duane Grigsby <duane.grigsby@cavium.com>
>> 
>> Added logic to change the login process into an optional PRIL
>> step for FC-NVMe ports as a separate operation, such that we can
>> change type to 0x28 (NVMe).
>> 
>> Currently, the driver performs the PLOGI/PRLI together as one
>> operation, but if the discovered port is an NVMe port then we
>> first issue the PLOGI and then we issue the PRLI. Also, the
>> fabric discovery logic was changed to mark each discovered FC
>> NVMe port, so that we can register them with the FC-NVMe transport
>> later.
>> 
>> Signed-off-by: Darren Trapp <darren.trapp@cavium.com>
>> Signed-off-by: Duane Grigsby <duane.grigsby@cavium.com>
>> Signed-off-by: Anil Gurumurthy <anil.gurumurhty@cavium.com>
> 
> I just got a whole load of bounces from this: you've misspelled Anil's
> email address (h and t transposed).  It looks like a generic cut and
> paste, so could you fix it for next time?
> 
> Thanks,
> 
> James
> 

Sorry about that. Will fix this up for next time.

Thanks,
- Himanshu


Thread overview: 22+ messages
2017-06-21 20:48 [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Madhani, Himanshu
2017-06-21 20:48 ` [PATCH v2 1/6] qla2xxx: Add FC-NVMe port discovery and PRLI handling Madhani, Himanshu
2017-06-22  6:28   ` Hannes Reinecke
2017-06-28 21:15   ` James Bottomley
2017-06-28 21:23     ` Madhani, Himanshu
2017-06-21 20:48 ` [PATCH v2 2/6] qla2xxx: Add FC-NVMe command handling Madhani, Himanshu
2017-06-22  6:28   ` Hannes Reinecke
2017-06-21 20:48 ` [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration Madhani, Himanshu
2017-06-22  6:32   ` Hannes Reinecke
2017-06-22  9:46   ` Johannes Thumshirn
     [not found]     ` <2d07d1fd-545b-0308-8a2b-5cfb59cbcf2b@broadcom.com>
2017-06-22 18:53       ` Johannes Thumshirn
2017-06-23  3:16     ` Madhani, Himanshu
2017-06-23  6:28       ` Johannes Thumshirn
2017-06-21 20:48 ` [PATCH v2 4/6] qla2xxx: Send FC4 type NVMe to the management server Madhani, Himanshu
2017-06-22  6:33   ` Hannes Reinecke
2017-06-22  9:51   ` Johannes Thumshirn
2017-06-21 20:48 ` [PATCH v2 5/6] qla2xxx: Use FC-NVMe FC4 type for FDMI registration Madhani, Himanshu
2017-06-22  6:33   ` Hannes Reinecke
2017-06-22  9:52   ` Johannes Thumshirn
2017-06-21 20:48 ` [PATCH v2 6/6] qla2xxx: Update Driver version to 10.00.00.00-k Madhani, Himanshu
2017-06-22  6:33   ` Hannes Reinecke
2017-06-28  1:49 ` [PATCH v2 0/6] qla2xxx: Add NVMe FC Fabric support in driver Martin K. Petersen
