linux-scsi.vger.kernel.org archive mirror
* [PATCH 00/14] target: Allow backends to operate independent of se_cmd
@ 2016-06-01 21:48 Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 01/14] target: Fix for hang of Ordered task in TCM Nicholas A. Bellinger
                   ` (13 more replies)
  0 siblings, 14 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Hi Jens, HCH & Co,

This series introduces target_iostate and target_iomem descriptors
that abstract what existing target backend drivers require in order
to process I/O, sync_cache, write_same and unmap via sbc_ops.

The purpose is to allow existing target backend drivers under
/sys/kernel/config/target/core/ to be accessed from outside the
existing /sys/kernel/config/target/$FABRIC/ configfs layout, and to
operate independently of se_cmd and other SCSI-specific dependencies.

Namely, it's intended to let the upcoming nvme-target code utilize
existing target-core backend drivers and T10-PI logic, without
requiring consumers to sit under the /sys/kernel/config/target/$FABRIC/
configfs layout.

Also included are a prerequisite bug-fix for target-core and an
IBLOCK optimization that eliminates an internal memory allocation.

Beyond that, the changes are predominantly mechanical.

Please review,

--nab

Nicholas Bellinger (14):
  target: Fix for hang of Ordered task in TCM
  target: Add target_iomem descriptor
  target: Add target_iostate descriptor
  target: Add target_complete_ios wrapper
  target: Setup target_iostate memory in __target_execute_cmd
  target: Convert se_cmd->execute_cmd to target_iostate
  target/sbc: Convert sbc_ops->execute_rw to target_iostate
  target/sbc: Convert sbc_dif_copy_prot to target_iostate
  target/file: Convert sbc_dif_verify to target_iostate
  target/iblock: Fold iblock_req into target_iostate
  target/sbc: Convert sbc_ops->execute_sync_cache to target_iostate
  target/sbc: Convert sbc_ops->execute_write_same to target_iostate
  target/sbc: Convert sbc_ops->execute_unmap to target_iostate
  target: Make sbc_ops accessable via target_backend_ops

 drivers/infiniband/ulp/isert/ib_isert.c           |  61 ++---
 drivers/infiniband/ulp/srpt/ib_srpt.c             |   6 +-
 drivers/scsi/qla2xxx/qla_target.c                 |  64 ++---
 drivers/scsi/qla2xxx/tcm_qla2xxx.c                |  29 +--
 drivers/target/iscsi/cxgbit/cxgbit_ddp.c          |   8 +-
 drivers/target/iscsi/cxgbit/cxgbit_target.c       |  20 +-
 drivers/target/iscsi/iscsi_target.c               |  26 +-
 drivers/target/iscsi/iscsi_target_datain_values.c |  18 +-
 drivers/target/iscsi/iscsi_target_erl0.c          |  24 +-
 drivers/target/iscsi/iscsi_target_erl1.c          |   8 +-
 drivers/target/iscsi/iscsi_target_seq_pdu_list.c  |  40 ++--
 drivers/target/iscsi/iscsi_target_tmr.c           |   4 +-
 drivers/target/iscsi/iscsi_target_util.c          |   4 +-
 drivers/target/loopback/tcm_loop.c                |   2 +-
 drivers/target/sbp/sbp_target.c                   |   8 +-
 drivers/target/target_core_alua.c                 |  43 ++--
 drivers/target/target_core_alua.h                 |   6 +-
 drivers/target/target_core_device.c               |  24 +-
 drivers/target/target_core_file.c                 | 142 +++++------
 drivers/target/target_core_iblock.c               | 166 ++++++-------
 drivers/target/target_core_iblock.h               |   5 -
 drivers/target/target_core_internal.h             |   1 +
 drivers/target/target_core_pr.c                   |  68 +++---
 drivers/target/target_core_pr.h                   |   8 +-
 drivers/target/target_core_pscsi.c                |  26 +-
 drivers/target/target_core_rd.c                   |  44 ++--
 drivers/target/target_core_sbc.c                  | 278 ++++++++++++----------
 drivers/target/target_core_spc.c                  |  47 ++--
 drivers/target/target_core_transport.c            | 272 ++++++++++++---------
 drivers/target/target_core_user.c                 |  41 ++--
 drivers/target/target_core_xcopy.c                |  33 +--
 drivers/target/target_core_xcopy.h                |   4 +-
 drivers/target/tcm_fc/tfc_cmd.c                   |  14 +-
 drivers/target/tcm_fc/tfc_io.c                    |  21 +-
 drivers/usb/gadget/function/f_tcm.c               |  50 ++--
 drivers/vhost/scsi.c                              |   2 +-
 include/target/target_core_backend.h              |  30 ++-
 include/target/target_core_base.h                 |  72 ++++--
 include/target/target_core_fabric.h               |   3 +-
 include/trace/events/target.h                     |   4 +-
 40 files changed, 931 insertions(+), 795 deletions(-)

-- 
1.9.1


* [PATCH 01/14] target: Fix for hang of Ordered task in TCM
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 02/14] target: Add target_iomem descriptor Nicholas A. Bellinger
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

If a command with a Simple task attribute is failed due to a Unit
Attention, then a subsequent command with an Ordered task attribute
will hang forever.  The reason for this is that the Unit Attention
status is checked for in target_setup_cmd_from_cdb, before the call
to target_execute_cmd, which calls target_handle_task_attr, which
in turn increments dev->simple_cmds.

However, transport_generic_request_failure still calls
transport_complete_task_attr, which will decrement dev->simple_cmds.
In this case, simple_cmds is now -1.  So when a command with the
Ordered task attribute is sent, target_handle_task_attr sees that
dev->simple_cmds is not 0, so it decides it can't execute the
command until all the (nonexistent) Simple commands have completed.

Reported-by: Michael Cyr <mikecyr@linux.vnet.ibm.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_internal.h  |  1 +
 drivers/target/target_core_sbc.c       |  2 +-
 drivers/target/target_core_transport.c | 45 ++++++++++++++++++++++++++--------
 include/target/target_core_fabric.h    |  1 -
 4 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
index fc91e85..e2c970a 100644
--- a/drivers/target/target_core_internal.h
+++ b/drivers/target/target_core_internal.h
@@ -146,6 +146,7 @@ sense_reason_t	target_cmd_size_check(struct se_cmd *cmd, unsigned int size);
 void	target_qf_do_work(struct work_struct *work);
 bool	target_check_wce(struct se_device *dev);
 bool	target_check_fua(struct se_device *dev);
+void	__target_execute_cmd(struct se_cmd *, bool);
 
 /* target_core_stat.c */
 void	target_stat_setup_dev_default_groups(struct se_device *);
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index a9057aa..04f616b 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -602,7 +602,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	cmd->transport_state |= CMD_T_ACTIVE|CMD_T_BUSY|CMD_T_SENT;
 	spin_unlock_irq(&cmd->t_state_lock);
 
-	__target_execute_cmd(cmd);
+	__target_execute_cmd(cmd, false);
 
 	kfree(buf);
 	return ret;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 5ab3967..614ef3f 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1761,20 +1761,45 @@ queue_full:
 }
 EXPORT_SYMBOL(transport_generic_request_failure);
 
-void __target_execute_cmd(struct se_cmd *cmd)
+void __target_execute_cmd(struct se_cmd *cmd, bool do_checks)
 {
 	sense_reason_t ret;
 
-	if (cmd->execute_cmd) {
-		ret = cmd->execute_cmd(cmd);
-		if (ret) {
-			spin_lock_irq(&cmd->t_state_lock);
-			cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
-			spin_unlock_irq(&cmd->t_state_lock);
+	if (!cmd->execute_cmd) {
+		ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+		goto err;
+	}
+	if (do_checks) {
+		/*
+		 * Check for an existing UNIT ATTENTION condition after
+		 * target_handle_task_attr() has done SAM task attr
+		 * checking, and possibly have already deferred execution
+		 * out to target_restart_delayed_cmds() context.
+		 */
+		ret = target_scsi3_ua_check(cmd);
+		if (ret)
+			goto err;
 
-			transport_generic_request_failure(cmd, ret);
+		ret = target_alua_state_check(cmd);
+		if (ret)
+			goto err;
+
+		ret = target_check_reservation(cmd);
+		if (ret) {
+			cmd->scsi_status = SAM_STAT_RESERVATION_CONFLICT;
+			goto err;
 		}
 	}
+
+	ret = cmd->execute_cmd(cmd);
+	if (!ret)
+		return;
+err:
+	spin_lock_irq(&cmd->t_state_lock);
+	cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
+	spin_unlock_irq(&cmd->t_state_lock);
+
+	transport_generic_request_failure(cmd, ret);
 }
 
 static int target_write_prot_action(struct se_cmd *cmd)
@@ -1899,7 +1924,7 @@ void target_execute_cmd(struct se_cmd *cmd)
 		return;
 	}
 
-	__target_execute_cmd(cmd);
+	__target_execute_cmd(cmd, true);
 }
 EXPORT_SYMBOL(target_execute_cmd);
 
@@ -1923,7 +1948,7 @@ static void target_restart_delayed_cmds(struct se_device *dev)
 		list_del(&cmd->se_delayed_node);
 		spin_unlock(&dev->delayed_cmd_lock);
 
-		__target_execute_cmd(cmd);
+		__target_execute_cmd(cmd, true);
 
 		if (cmd->sam_task_attr == TCM_ORDERED_TAG)
 			break;
diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h
index de44462..5cd6faa 100644
--- a/include/target/target_core_fabric.h
+++ b/include/target/target_core_fabric.h
@@ -163,7 +163,6 @@ int	core_tmr_alloc_req(struct se_cmd *, void *, u8, gfp_t);
 void	core_tmr_release_req(struct se_tmr_req *);
 int	transport_generic_handle_tmr(struct se_cmd *);
 void	transport_generic_request_failure(struct se_cmd *, sense_reason_t);
-void	__target_execute_cmd(struct se_cmd *);
 int	transport_lookup_tmr_lun(struct se_cmd *, u64);
 void	core_allocate_nexus_loss_ua(struct se_node_acl *acl);
 
-- 
1.9.1



* [PATCH 02/14] target: Add target_iomem descriptor
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 01/14] target: Fix for hang of Ordered task in TCM Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 03/14] target: Add target_iostate descriptor Nicholas A. Bellinger
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch introduces a new struct target_iomem descriptor containing
the scatterlist memory and scatterlist counts currently held within
the existing struct se_cmd.

This includes:

    - t_data_* // Used to store READ/WRITE payloads
    - t_data_*_orig // Used to store COMPARE_AND_WRITE payload
    - t_data_vmap // Used to map payload for CONTROL CDB emulation
    - t_bidi_data_* // Used for bidirectional READ payload
    - t_prot_* // Used for T10-PI payload

It also includes the associated tree-wide mechanical conversion of
target backend and fabric driver code.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/infiniband/ulp/isert/ib_isert.c     | 24 +++----
 drivers/scsi/qla2xxx/qla_target.c           |  4 +-
 drivers/scsi/qla2xxx/tcm_qla2xxx.c          | 19 +++---
 drivers/target/iscsi/cxgbit/cxgbit_ddp.c    |  4 +-
 drivers/target/iscsi/cxgbit/cxgbit_target.c | 10 +--
 drivers/target/iscsi/iscsi_target.c         |  4 +-
 drivers/target/sbp/sbp_target.c             |  4 +-
 drivers/target/target_core_file.c           | 24 +++----
 drivers/target/target_core_iblock.c         | 15 +++--
 drivers/target/target_core_pscsi.c          |  4 +-
 drivers/target/target_core_rd.c             |  2 +-
 drivers/target/target_core_sbc.c            | 49 +++++++-------
 drivers/target/target_core_transport.c      | 99 +++++++++++++++--------------
 drivers/target/target_core_user.c           | 21 +++---
 drivers/target/target_core_xcopy.c          | 17 ++---
 drivers/target/tcm_fc/tfc_cmd.c             |  8 +--
 drivers/target/tcm_fc/tfc_io.c              | 11 ++--
 drivers/usb/gadget/function/f_tcm.c         | 28 ++++----
 include/target/target_core_base.h           | 28 +++++---
 19 files changed, 199 insertions(+), 176 deletions(-)

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index a990c04..2c41a8b 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -1128,14 +1128,14 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
 
 	if (imm_data_len != data_len) {
 		sg_nents = max(1UL, DIV_ROUND_UP(imm_data_len, PAGE_SIZE));
-		sg_copy_from_buffer(cmd->se_cmd.t_data_sg, sg_nents,
+		sg_copy_from_buffer(cmd->se_cmd.t_iomem.t_data_sg, sg_nents,
 				    &rx_desc->data[0], imm_data_len);
 		isert_dbg("Copy Immediate sg_nents: %u imm_data_len: %d\n",
 			  sg_nents, imm_data_len);
 	} else {
 		sg_init_table(&isert_cmd->sg, 1);
-		cmd->se_cmd.t_data_sg = &isert_cmd->sg;
-		cmd->se_cmd.t_data_nents = 1;
+		cmd->se_cmd.t_iomem.t_data_sg = &isert_cmd->sg;
+		cmd->se_cmd.t_iomem.t_data_nents = 1;
 		sg_set_buf(&isert_cmd->sg, &rx_desc->data[0], imm_data_len);
 		isert_dbg("Transfer Immediate imm_data_len: %d\n",
 			  imm_data_len);
@@ -1192,7 +1192,7 @@ isert_handle_iscsi_dataout(struct isert_conn *isert_conn,
 		  cmd->se_cmd.data_length);
 
 	sg_off = cmd->write_data_done / PAGE_SIZE;
-	sg_start = &cmd->se_cmd.t_data_sg[sg_off];
+	sg_start = &cmd->se_cmd.t_iomem.t_data_sg[sg_off];
 	sg_nents = max(1UL, DIV_ROUND_UP(unsol_data_len, PAGE_SIZE));
 	page_off = cmd->write_data_done % PAGE_SIZE;
 	/*
@@ -1463,6 +1463,7 @@ static void
 isert_rdma_rw_ctx_destroy(struct isert_cmd *cmd, struct isert_conn *conn)
 {
 	struct se_cmd *se_cmd = &cmd->iscsi_cmd->se_cmd;
+	struct target_iomem *iomem = &se_cmd->t_iomem;
 	enum dma_data_direction dir = target_reverse_dma_direction(se_cmd);
 
 	if (!cmd->rw.nr_ops)
@@ -1470,12 +1471,12 @@ isert_rdma_rw_ctx_destroy(struct isert_cmd *cmd, struct isert_conn *conn)
 
 	if (isert_prot_cmd(conn, se_cmd)) {
 		rdma_rw_ctx_destroy_signature(&cmd->rw, conn->qp,
-				conn->cm_id->port_num, se_cmd->t_data_sg,
-				se_cmd->t_data_nents, se_cmd->t_prot_sg,
-				se_cmd->t_prot_nents, dir);
+				conn->cm_id->port_num, iomem->t_data_sg,
+				iomem->t_data_nents, iomem->t_prot_sg,
+				iomem->t_prot_nents, dir);
 	} else {
 		rdma_rw_ctx_destroy(&cmd->rw, conn->qp, conn->cm_id->port_num,
-				se_cmd->t_data_sg, se_cmd->t_data_nents, dir);
+				iomem->t_data_sg, iomem->t_data_nents, dir);
 	}
 
 	cmd->rw.nr_ops = 0;
@@ -2076,6 +2077,7 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
 		struct ib_cqe *cqe, struct ib_send_wr *chain_wr)
 {
 	struct se_cmd *se_cmd = &cmd->iscsi_cmd->se_cmd;
+	struct target_iomem *iomem = &se_cmd->t_iomem;
 	enum dma_data_direction dir = target_reverse_dma_direction(se_cmd);
 	u8 port_num = conn->cm_id->port_num;
 	u64 addr;
@@ -2101,12 +2103,12 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
 
 		WARN_ON_ONCE(offset);
 		ret = rdma_rw_ctx_signature_init(&cmd->rw, conn->qp, port_num,
-				se_cmd->t_data_sg, se_cmd->t_data_nents,
-				se_cmd->t_prot_sg, se_cmd->t_prot_nents,
+				iomem->t_data_sg, iomem->t_data_nents,
+				iomem->t_prot_sg, iomem->t_prot_nents,
 				&sig_attrs, addr, rkey, dir);
 	} else {
 		ret = rdma_rw_ctx_init(&cmd->rw, conn->qp, port_num,
-				se_cmd->t_data_sg, se_cmd->t_data_nents,
+				iomem->t_data_sg, iomem->t_data_nents,
 				offset, addr, rkey, dir);
 	}
 	if (ret < 0) {
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index ca39deb..f93bd5f 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -4903,8 +4903,8 @@ restart:
 		}
 		se_cmd = &cmd->se_cmd;
 
-		cmd->sg_cnt = se_cmd->t_data_nents;
-		cmd->sg = se_cmd->t_data_sg;
+		cmd->sg_cnt = se_cmd->t_iomem.t_data_nents;
+		cmd->sg = se_cmd->t_iomem.t_data_sg;
 
 		ql_dbg(ql_dbg_tgt_mgt, vha, 0xf02c,
 		       "SRR cmd %p (se_cmd %p, tag %lld, op %x), sg_cnt=%d, offset=%d",
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index 6643f6f..dace993 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -381,11 +381,11 @@ static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
 	cmd->bufflen = se_cmd->data_length;
 	cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
 
-	cmd->sg_cnt = se_cmd->t_data_nents;
-	cmd->sg = se_cmd->t_data_sg;
+	cmd->sg_cnt = se_cmd->t_iomem.t_data_nents;
+	cmd->sg = se_cmd->t_iomem.t_data_sg;
 
-	cmd->prot_sg_cnt = se_cmd->t_prot_nents;
-	cmd->prot_sg = se_cmd->t_prot_sg;
+	cmd->prot_sg_cnt = se_cmd->t_iomem.t_prot_nents;
+	cmd->prot_sg = se_cmd->t_iomem.t_prot_sg;
 	cmd->blk_sz  = se_cmd->se_dev->dev_attrib.block_size;
 	se_cmd->pi_err = 0;
 
@@ -595,12 +595,12 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
 	cmd->bufflen = se_cmd->data_length;
 	cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
 
-	cmd->sg_cnt = se_cmd->t_data_nents;
-	cmd->sg = se_cmd->t_data_sg;
+	cmd->sg_cnt = se_cmd->t_iomem.t_data_nents;
+	cmd->sg = se_cmd->t_iomem.t_data_sg;
 	cmd->offset = 0;
 
-	cmd->prot_sg_cnt = se_cmd->t_prot_nents;
-	cmd->prot_sg = se_cmd->t_prot_sg;
+	cmd->prot_sg_cnt = se_cmd->t_iomem.t_prot_nents;
+	cmd->prot_sg = se_cmd->t_iomem.t_prot_sg;
 	cmd->blk_sz  = se_cmd->se_dev->dev_attrib.block_size;
 	se_cmd->pi_err = 0;
 
@@ -1817,7 +1817,8 @@ static const struct target_core_fabric_ops tcm_qla2xxx_ops = {
 	.node_acl_size			= sizeof(struct tcm_qla2xxx_nacl),
 	/*
 	 * XXX: Limit assumes single page per scatter-gather-list entry.
-	 * Current maximum is ~4.9 MB per se_cmd->t_data_sg with PAGE_SIZE=4096
+	 * Current maximum is ~4.9 MB per se_cmd->t_iomem.t_data_sg with
+	 * PAGE_SIZE=4096
 	 */
 	.max_data_sg_nents		= 1200,
 	.get_fabric_name		= tcm_qla2xxx_get_fabric_name,
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
index 5d78bdb..a0c94b4 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
@@ -245,8 +245,8 @@ cxgbit_get_r2t_ttt(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 
 	ccmd->setup_ddp = false;
 
-	ttinfo->sgl = cmd->se_cmd.t_data_sg;
-	ttinfo->nents = cmd->se_cmd.t_data_nents;
+	ttinfo->sgl = cmd->se_cmd.t_iomem.t_data_sg;
+	ttinfo->nents = cmd->se_cmd.t_iomem.t_data_nents;
 
 	ret = cxgbit_ddp_reserve(csk, ttinfo, cmd->se_cmd.data_length);
 	if (ret < 0) {
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
index d02bf58..ac86574 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
@@ -368,7 +368,7 @@ cxgbit_map_skb(struct iscsi_cmd *cmd, struct sk_buff *skb, u32 data_offset,
 	/*
 	 * We know each entry in t_data_sg contains a page.
 	 */
-	sg = &cmd->se_cmd.t_data_sg[data_offset / PAGE_SIZE];
+	sg = &cmd->se_cmd.t_iomem.t_data_sg[data_offset / PAGE_SIZE];
 	page_off = (data_offset % PAGE_SIZE);
 
 	while (data_length && (i < nr_frags)) {
@@ -864,12 +864,12 @@ cxgbit_handle_immediate_data(struct iscsi_cmd *cmd, struct iscsi_scsi_req *hdr,
 			    dfrag->page_offset);
 		get_page(dfrag->page.p);
 
-		cmd->se_cmd.t_data_sg = &ccmd->sg;
-		cmd->se_cmd.t_data_nents = 1;
+		cmd->se_cmd.t_iomem.t_data_sg = &ccmd->sg;
+		cmd->se_cmd.t_iomem.t_data_nents = 1;
 
 		ccmd->release = true;
 	} else {
-		struct scatterlist *sg = &cmd->se_cmd.t_data_sg[0];
+		struct scatterlist *sg = &cmd->se_cmd.t_iomem.t_data_sg[0];
 		u32 sg_nents = max(1UL, DIV_ROUND_UP(pdu_cb->dlen, PAGE_SIZE));
 
 		cxgbit_skb_copy_to_sg(csk->skb, sg, sg_nents);
@@ -1005,7 +1005,7 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
 
 	if (!(pdu_cb->flags & PDUCBF_RX_DATA_DDPD)) {
 		sg_off = data_offset / PAGE_SIZE;
-		sg_start = &cmd->se_cmd.t_data_sg[sg_off];
+		sg_start = &cmd->se_cmd.t_iomem.t_data_sg[sg_off];
 		sg_nents = max(1UL, DIV_ROUND_UP(data_len, PAGE_SIZE));
 
 		cxgbit_skb_copy_to_sg(csk->skb, sg_start, sg_nents);
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 50f3d3a..44388e3 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -907,12 +907,12 @@ static int iscsit_map_iovec(
 	 */
 	u32 ent = data_offset / PAGE_SIZE;
 
-	if (ent >= cmd->se_cmd.t_data_nents) {
+	if (ent >= cmd->se_cmd.t_iomem.t_data_nents) {
 		pr_err("Initial page entry out-of-bounds\n");
 		return -1;
 	}
 
-	sg = &cmd->se_cmd.t_data_sg[ent];
+	sg = &cmd->se_cmd.t_iomem.t_data_sg[ent];
 	page_off = (data_offset % PAGE_SIZE);
 
 	cmd->first_data_sg = sg;
diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_target.c
index 58bb6ed..dcc6eba 100644
--- a/drivers/target/sbp/sbp_target.c
+++ b/drivers/target/sbp/sbp_target.c
@@ -1299,8 +1299,8 @@ static int sbp_rw_data(struct sbp_target_request *req)
 		length = req->se_cmd.data_length;
 	}
 
-	sg_miter_start(&iter, req->se_cmd.t_data_sg, req->se_cmd.t_data_nents,
-		sg_miter_flags);
+	sg_miter_start(&iter, req->se_cmd.t_iomem.t_data_sg,
+		       req->se_cmd.t_iomem.t_data_nents, sg_miter_flags);
 
 	while (length || num_pte) {
 		if (!length) {
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 75f0f08..033c6a8 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -375,12 +375,12 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 
-	if (cmd->t_data_nents > 1 ||
-	    cmd->t_data_sg[0].length != cmd->se_dev->dev_attrib.block_size) {
+	if (cmd->t_iomem.t_data_nents > 1 ||
+	    cmd->t_iomem.t_data_sg[0].length != cmd->se_dev->dev_attrib.block_size) {
 		pr_err("WRITE_SAME: Illegal SGL t_data_nents: %u length: %u"
 			" block_size: %u\n",
-			cmd->t_data_nents,
-			cmd->t_data_sg[0].length,
+			cmd->t_iomem.t_data_nents,
+			cmd->t_iomem.t_data_sg[0].length,
 			cmd->se_dev->dev_attrib.block_size);
 		return TCM_INVALID_CDB_FIELD;
 	}
@@ -390,9 +390,9 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
 	for (i = 0; i < nolb; i++) {
-		bvec[i].bv_page = sg_page(&cmd->t_data_sg[0]);
-		bvec[i].bv_len = cmd->t_data_sg[0].length;
-		bvec[i].bv_offset = cmd->t_data_sg[0].offset;
+		bvec[i].bv_page = sg_page(&cmd->t_iomem.t_data_sg[0]);
+		bvec[i].bv_len = cmd->t_iomem.t_data_sg[0].length;
+		bvec[i].bv_offset = cmd->t_iomem.t_data_sg[0].offset;
 
 		len += se_dev->dev_attrib.block_size;
 	}
@@ -534,7 +534,8 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	if (data_direction == DMA_FROM_DEVICE) {
 		if (cmd->prot_type && dev->dev_attrib.pi_prot_type) {
 			ret = fd_do_rw(cmd, pfile, dev->prot_length,
-				       cmd->t_prot_sg, cmd->t_prot_nents,
+				       cmd->t_iomem.t_prot_sg,
+				       cmd->t_iomem.t_prot_nents,
 				       cmd->prot_length, 0);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -548,7 +549,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 					ilog2(dev->dev_attrib.block_size);
 
 			rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors,
-					    0, cmd->t_prot_sg, 0);
+					    0, cmd->t_iomem.t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
@@ -558,7 +559,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 					ilog2(dev->dev_attrib.block_size);
 
 			rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors,
-					    0, cmd->t_prot_sg, 0);
+					    0, cmd->t_iomem.t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
@@ -585,7 +586,8 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 
 		if (ret > 0 && cmd->prot_type && dev->dev_attrib.pi_prot_type) {
 			ret = fd_do_rw(cmd, pfile, dev->prot_length,
-				       cmd->t_prot_sg, cmd->t_prot_nents,
+				       cmd->t_iomem.t_prot_sg,
+				       cmd->t_iomem.t_prot_nents,
 				       cmd->prot_length, 1);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 7c4efb4..80ad456 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -416,7 +416,7 @@ static sense_reason_t
 iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
-	struct scatterlist *sg = &cmd->t_data_sg[0];
+	struct scatterlist *sg = &cmd->t_iomem.t_data_sg[0];
 	struct page *page = NULL;
 	int ret;
 
@@ -424,7 +424,8 @@ iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
 		page = alloc_page(GFP_KERNEL);
 		if (!page)
 			return TCM_OUT_OF_RESOURCES;
-		sg_copy_to_buffer(sg, cmd->t_data_nents, page_address(page),
+		sg_copy_to_buffer(sg, cmd->t_iomem.t_data_nents,
+				  page_address(page),
 				  dev->dev_attrib.block_size);
 	}
 
@@ -460,12 +461,12 @@ iblock_execute_write_same(struct se_cmd *cmd)
 		       " backends not supported\n");
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
-	sg = &cmd->t_data_sg[0];
+	sg = &cmd->t_iomem.t_data_sg[0];
 
-	if (cmd->t_data_nents > 1 ||
+	if (cmd->t_iomem.t_data_nents > 1 ||
 	    sg->length != cmd->se_dev->dev_attrib.block_size) {
 		pr_err("WRITE_SAME: Illegal SGL t_data_nents: %u length: %u"
-			" block_size: %u\n", cmd->t_data_nents, sg->length,
+			" block_size: %u\n", cmd->t_iomem.t_data_nents, sg->length,
 			cmd->se_dev->dev_attrib.block_size);
 		return TCM_INVALID_CDB_FIELD;
 	}
@@ -636,7 +637,7 @@ iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
 		return -ENODEV;
 	}
 
-	bip = bio_integrity_alloc(bio, GFP_NOIO, cmd->t_prot_nents);
+	bip = bio_integrity_alloc(bio, GFP_NOIO, cmd->t_iomem.t_prot_nents);
 	if (IS_ERR(bip)) {
 		pr_err("Unable to allocate bio_integrity_payload\n");
 		return PTR_ERR(bip);
@@ -649,7 +650,7 @@ iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
 	pr_debug("IBLOCK BIP Size: %u Sector: %llu\n", bip->bip_iter.bi_size,
 		 (unsigned long long)bip->bip_iter.bi_sector);
 
-	for_each_sg(cmd->t_prot_sg, sg, cmd->t_prot_nents, i) {
+	for_each_sg(cmd->t_iomem.t_prot_sg, sg, cmd->t_iomem.t_prot_nents, i) {
 
 		rc = bio_integrity_add_page(bio, sg_page(sg), sg->length,
 					    sg->offset);
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index de18790..75041dd 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -990,8 +990,8 @@ pscsi_parse_cdb(struct se_cmd *cmd)
 static sense_reason_t
 pscsi_execute_cmd(struct se_cmd *cmd)
 {
-	struct scatterlist *sgl = cmd->t_data_sg;
-	u32 sgl_nents = cmd->t_data_nents;
+	struct scatterlist *sgl = cmd->t_iomem.t_data_sg;
+	u32 sgl_nents = cmd->t_iomem.t_data_nents;
 	enum dma_data_direction data_direction = cmd->data_direction;
 	struct pscsi_dev_virt *pdv = PSCSI_DEV(cmd->se_dev);
 	struct pscsi_plugin_task *pt;
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 24b36fd..d840281 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -426,7 +426,7 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 				    prot_sg, prot_offset);
 	else
 		rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors, 0,
-				    cmd->t_prot_sg, 0);
+				    cmd->t_iomem.t_prot_sg, 0);
 
 	if (!rc)
 		sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg, prot_offset);
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 04f616b..c80a225 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -404,11 +404,11 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
 		return TCM_OUT_OF_RESOURCES;
 	}
 	/*
-	 * Copy the scatterlist WRITE buffer located at cmd->t_data_sg
+	 * Copy the scatterlist WRITE buffer located at cmd->t_iomem.t_data_sg
 	 * into the locally allocated *buf
 	 */
-	sg_copy_to_buffer(cmd->t_data_sg,
-			  cmd->t_data_nents,
+	sg_copy_to_buffer(cmd->t_iomem.t_data_sg,
+			  cmd->t_iomem.t_data_nents,
 			  buf,
 			  cmd->data_length);
 
@@ -418,7 +418,8 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
 	 */
 
 	offset = 0;
-	for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, count) {
+	for_each_sg(cmd->t_iomem.t_bidi_data_sg, sg,
+		    cmd->t_iomem.t_bidi_data_nents, count) {
 		addr = kmap_atomic(sg_page(sg));
 		if (!addr) {
 			ret = TCM_OUT_OF_RESOURCES;
@@ -442,7 +443,7 @@ sbc_execute_rw(struct se_cmd *cmd)
 {
 	struct sbc_ops *ops = cmd->protocol_data;
 
-	return ops->execute_rw(cmd, cmd->t_data_sg, cmd->t_data_nents,
+	return ops->execute_rw(cmd, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
 			       cmd->data_direction);
 }
 
@@ -490,7 +491,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	 * Handle early failure in transport_generic_request_failure(),
 	 * which will not have taken ->caw_sem yet..
 	 */
-	if (!success && (!cmd->t_data_sg || !cmd->t_bidi_data_sg))
+	if (!success && (!cmd->t_iomem.t_data_sg || !cmd->t_iomem.t_bidi_data_sg))
 		return TCM_NO_SENSE;
 	/*
 	 * Handle special case for zero-length COMPARE_AND_WRITE
@@ -514,19 +515,19 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 		goto out;
 	}
 
-	write_sg = kmalloc(sizeof(struct scatterlist) * cmd->t_data_nents,
+	write_sg = kmalloc(sizeof(struct scatterlist) * cmd->t_iomem.t_data_nents,
 			   GFP_KERNEL);
 	if (!write_sg) {
 		pr_err("Unable to allocate compare_and_write sg\n");
 		ret = TCM_OUT_OF_RESOURCES;
 		goto out;
 	}
-	sg_init_table(write_sg, cmd->t_data_nents);
+	sg_init_table(write_sg, cmd->t_iomem.t_data_nents);
 	/*
 	 * Setup verify and write data payloads from total NumberLBAs.
 	 */
-	rc = sg_copy_to_buffer(cmd->t_data_sg, cmd->t_data_nents, buf,
-			       cmd->data_length);
+	rc = sg_copy_to_buffer(cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
+			       buf, cmd->data_length);
 	if (!rc) {
 		pr_err("sg_copy_to_buffer() failed for compare_and_write\n");
 		ret = TCM_OUT_OF_RESOURCES;
@@ -535,7 +536,8 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	/*
 	 * Compare against SCSI READ payload against verify payload
 	 */
-	for_each_sg(cmd->t_bidi_data_sg, sg, cmd->t_bidi_data_nents, i) {
+	for_each_sg(cmd->t_iomem.t_bidi_data_sg, sg,
+		    cmd->t_iomem.t_bidi_data_nents, i) {
 		addr = (unsigned char *)kmap_atomic(sg_page(sg));
 		if (!addr) {
 			ret = TCM_OUT_OF_RESOURCES;
@@ -560,7 +562,8 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 
 	i = 0;
 	len = cmd->t_task_nolb * block_size;
-	sg_miter_start(&m, cmd->t_data_sg, cmd->t_data_nents, SG_MITER_TO_SG);
+	sg_miter_start(&m, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
+		       SG_MITER_TO_SG);
 	/*
 	 * Currently assumes NoLB=1 and SGLs are PAGE_SIZE..
 	 */
@@ -584,10 +587,10 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	 * assignments, to be released in transport_free_pages() ->
 	 * transport_reset_sgl_orig()
 	 */
-	cmd->t_data_sg_orig = cmd->t_data_sg;
-	cmd->t_data_sg = write_sg;
-	cmd->t_data_nents_orig = cmd->t_data_nents;
-	cmd->t_data_nents = 1;
+	cmd->t_iomem.t_data_sg_orig = cmd->t_iomem.t_data_sg;
+	cmd->t_iomem.t_data_sg = write_sg;
+	cmd->t_iomem.t_data_nents_orig = cmd->t_iomem.t_data_nents;
+	cmd->t_iomem.t_data_nents = 1;
 
 	cmd->sam_task_attr = TCM_HEAD_TAG;
 	cmd->transport_complete_callback = compare_and_write_post;
@@ -645,8 +648,8 @@ sbc_compare_and_write(struct se_cmd *cmd)
 	 */
 	cmd->data_length = cmd->t_task_nolb * dev->dev_attrib.block_size;
 
-	ret = ops->execute_rw(cmd, cmd->t_bidi_data_sg, cmd->t_bidi_data_nents,
-			      DMA_FROM_DEVICE);
+	ret = ops->execute_rw(cmd, cmd->t_iomem.t_bidi_data_sg,
+			      cmd->t_iomem.t_bidi_data_nents, DMA_FROM_DEVICE);
 	if (ret) {
 		cmd->transport_complete_callback = NULL;
 		up(&dev->caw_sem);
@@ -730,7 +733,7 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
 	bool fabric_prot = false;
 
-	if (!cmd->t_prot_sg || !cmd->t_prot_nents) {
+	if (!cmd->t_iomem.t_prot_sg || !cmd->t_iomem.t_prot_nents) {
 		if (unlikely(protect &&
 		    !dev->dev_attrib.pi_prot_type && !cmd->se_sess->sess_prot_type)) {
 			pr_err("CDB contains protect bit, but device + fabric does"
@@ -1244,13 +1247,13 @@ sbc_dif_generate(struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
 	struct t10_pi_tuple *sdt;
-	struct scatterlist *dsg = cmd->t_data_sg, *psg;
+	struct scatterlist *dsg = cmd->t_iomem.t_data_sg, *psg;
 	sector_t sector = cmd->t_task_lba;
 	void *daddr, *paddr;
 	int i, j, offset = 0;
 	unsigned int block_size = dev->dev_attrib.block_size;
 
-	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
+	for_each_sg(cmd->t_iomem.t_prot_sg, psg, cmd->t_iomem.t_prot_nents, i) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
 		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 
@@ -1362,7 +1365,7 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
 
 	left = sectors * dev->prot_length;
 
-	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
+	for_each_sg(cmd->t_iomem.t_prot_sg, psg, cmd->t_iomem.t_prot_nents, i) {
 		unsigned int psg_len, copied = 0;
 
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
@@ -1399,7 +1402,7 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 {
 	struct se_device *dev = cmd->se_dev;
 	struct t10_pi_tuple *sdt;
-	struct scatterlist *dsg = cmd->t_data_sg;
+	struct scatterlist *dsg = cmd->t_iomem.t_data_sg;
 	sector_t sector = start;
 	void *daddr, *paddr;
 	int i;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 614ef3f..e1e7c49 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -720,7 +720,7 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 
 	if (dev && dev->transport->transport_complete) {
 		dev->transport->transport_complete(cmd,
-				cmd->t_data_sg,
+				cmd->t_iomem.t_data_sg,
 				transport_get_sense_buffer(cmd));
 		if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE)
 			success = 1;
@@ -1400,10 +1400,10 @@ transport_generic_map_mem_to_cmd(struct se_cmd *cmd, struct scatterlist *sgl,
 		return TCM_INVALID_CDB_FIELD;
 	}
 
-	cmd->t_data_sg = sgl;
-	cmd->t_data_nents = sgl_count;
-	cmd->t_bidi_data_sg = sgl_bidi;
-	cmd->t_bidi_data_nents = sgl_bidi_count;
+	cmd->t_iomem.t_data_sg = sgl;
+	cmd->t_iomem.t_data_nents = sgl_count;
+	cmd->t_iomem.t_bidi_data_sg = sgl_bidi;
+	cmd->t_iomem.t_bidi_data_nents = sgl_bidi_count;
 
 	cmd->se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
 	return 0;
@@ -1503,8 +1503,8 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
 	 * if present.
 	 */
 	if (sgl_prot_count) {
-		se_cmd->t_prot_sg = sgl_prot;
-		se_cmd->t_prot_nents = sgl_prot_count;
+		se_cmd->t_iomem.t_prot_sg = sgl_prot;
+		se_cmd->t_iomem.t_prot_nents = sgl_prot_count;
 		se_cmd->se_cmd_flags |= SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC;
 	}
 
@@ -1821,7 +1821,7 @@ static int target_write_prot_action(struct se_cmd *cmd)
 
 		sectors = cmd->data_length >> ilog2(cmd->se_dev->dev_attrib.block_size);
 		cmd->pi_err = sbc_dif_verify(cmd, cmd->t_task_lba,
-					     sectors, 0, cmd->t_prot_sg, 0);
+					     sectors, 0, cmd->t_iomem.t_prot_sg, 0);
 		if (unlikely(cmd->pi_err)) {
 			spin_lock_irq(&cmd->t_state_lock);
 			cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
@@ -2051,8 +2051,8 @@ static bool target_read_prot_action(struct se_cmd *cmd)
 				  ilog2(cmd->se_dev->dev_attrib.block_size);
 
 			cmd->pi_err = sbc_dif_verify(cmd, cmd->t_task_lba,
-						     sectors, 0, cmd->t_prot_sg,
-						     0);
+						     sectors, 0,
+						     cmd->t_iomem.t_prot_sg, 0);
 			if (cmd->pi_err)
 				return true;
 		}
@@ -2216,22 +2216,22 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
 	 * Check for saved t_data_sg that may be used for COMPARE_AND_WRITE
 	 * emulation, and free + reset pointers if necessary..
 	 */
-	if (!cmd->t_data_sg_orig)
+	if (!cmd->t_iomem.t_data_sg_orig)
 		return;
 
-	kfree(cmd->t_data_sg);
-	cmd->t_data_sg = cmd->t_data_sg_orig;
-	cmd->t_data_sg_orig = NULL;
-	cmd->t_data_nents = cmd->t_data_nents_orig;
-	cmd->t_data_nents_orig = 0;
+	kfree(cmd->t_iomem.t_data_sg);
+	cmd->t_iomem.t_data_sg = cmd->t_iomem.t_data_sg_orig;
+	cmd->t_iomem.t_data_sg_orig = NULL;
+	cmd->t_iomem.t_data_nents = cmd->t_iomem.t_data_nents_orig;
+	cmd->t_iomem.t_data_nents_orig = 0;
 }
 
 static inline void transport_free_pages(struct se_cmd *cmd)
 {
 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
-		target_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
-		cmd->t_prot_sg = NULL;
-		cmd->t_prot_nents = 0;
+		target_free_sgl(cmd->t_iomem.t_prot_sg, cmd->t_iomem.t_prot_nents);
+		cmd->t_iomem.t_prot_sg = NULL;
+		cmd->t_iomem.t_prot_nents = 0;
 	}
 
 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
@@ -2240,23 +2240,23 @@ static inline void transport_free_pages(struct se_cmd *cmd)
 		 * SG_TO_MEM_NOALLOC to function with COMPARE_AND_WRITE
 		 */
 		if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) {
-			target_free_sgl(cmd->t_bidi_data_sg,
-					   cmd->t_bidi_data_nents);
-			cmd->t_bidi_data_sg = NULL;
-			cmd->t_bidi_data_nents = 0;
+			target_free_sgl(cmd->t_iomem.t_bidi_data_sg,
+					cmd->t_iomem.t_bidi_data_nents);
+			cmd->t_iomem.t_bidi_data_sg = NULL;
+			cmd->t_iomem.t_bidi_data_nents = 0;
 		}
 		transport_reset_sgl_orig(cmd);
 		return;
 	}
 	transport_reset_sgl_orig(cmd);
 
-	target_free_sgl(cmd->t_data_sg, cmd->t_data_nents);
-	cmd->t_data_sg = NULL;
-	cmd->t_data_nents = 0;
+	target_free_sgl(cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents);
+	cmd->t_iomem.t_data_sg = NULL;
+	cmd->t_iomem.t_data_nents = 0;
 
-	target_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
-	cmd->t_bidi_data_sg = NULL;
-	cmd->t_bidi_data_nents = 0;
+	target_free_sgl(cmd->t_iomem.t_bidi_data_sg, cmd->t_iomem.t_bidi_data_nents);
+	cmd->t_iomem.t_bidi_data_sg = NULL;
+	cmd->t_iomem.t_bidi_data_nents = 0;
 }
 
 /**
@@ -2277,7 +2277,7 @@ static int transport_put_cmd(struct se_cmd *cmd)
 
 void *transport_kmap_data_sg(struct se_cmd *cmd)
 {
-	struct scatterlist *sg = cmd->t_data_sg;
+	struct scatterlist *sg = cmd->t_iomem.t_data_sg;
 	struct page **pages;
 	int i;
 
@@ -2286,43 +2286,44 @@ void *transport_kmap_data_sg(struct se_cmd *cmd)
 	 * tcm_loop who may be using a contig buffer from the SCSI midlayer for
 	 * control CDBs passed as SGLs via transport_generic_map_mem_to_cmd()
 	 */
-	if (!cmd->t_data_nents)
+	if (!cmd->t_iomem.t_data_nents)
 		return NULL;
 
 	BUG_ON(!sg);
-	if (cmd->t_data_nents == 1)
+	if (cmd->t_iomem.t_data_nents == 1)
 		return kmap(sg_page(sg)) + sg->offset;
 
 	/* >1 page. use vmap */
-	pages = kmalloc(sizeof(*pages) * cmd->t_data_nents, GFP_KERNEL);
+	pages = kmalloc(sizeof(*pages) * cmd->t_iomem.t_data_nents, GFP_KERNEL);
 	if (!pages)
 		return NULL;
 
 	/* convert sg[] to pages[] */
-	for_each_sg(cmd->t_data_sg, sg, cmd->t_data_nents, i) {
+	for_each_sg(cmd->t_iomem.t_data_sg, sg, cmd->t_iomem.t_data_nents, i) {
 		pages[i] = sg_page(sg);
 	}
 
-	cmd->t_data_vmap = vmap(pages, cmd->t_data_nents,  VM_MAP, PAGE_KERNEL);
+	cmd->t_iomem.t_data_vmap = vmap(pages, cmd->t_iomem.t_data_nents,
+					VM_MAP, PAGE_KERNEL);
 	kfree(pages);
-	if (!cmd->t_data_vmap)
+	if (!cmd->t_iomem.t_data_vmap)
 		return NULL;
 
-	return cmd->t_data_vmap + cmd->t_data_sg[0].offset;
+	return cmd->t_iomem.t_data_vmap + cmd->t_iomem.t_data_sg[0].offset;
 }
 EXPORT_SYMBOL(transport_kmap_data_sg);
 
 void transport_kunmap_data_sg(struct se_cmd *cmd)
 {
-	if (!cmd->t_data_nents) {
+	if (!cmd->t_iomem.t_data_nents) {
 		return;
-	} else if (cmd->t_data_nents == 1) {
-		kunmap(sg_page(cmd->t_data_sg));
+	} else if (cmd->t_iomem.t_data_nents == 1) {
+		kunmap(sg_page(cmd->t_iomem.t_data_sg));
 		return;
 	}
 
-	vunmap(cmd->t_data_vmap);
-	cmd->t_data_vmap = NULL;
+	vunmap(cmd->t_iomem.t_data_vmap);
+	cmd->t_iomem.t_data_vmap = NULL;
 }
 EXPORT_SYMBOL(transport_kunmap_data_sg);
 
@@ -2382,7 +2383,8 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 
 	if (cmd->prot_op != TARGET_PROT_NORMAL &&
 	    !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
-		ret = target_alloc_sgl(&cmd->t_prot_sg, &cmd->t_prot_nents,
+		ret = target_alloc_sgl(&cmd->t_iomem.t_prot_sg,
+				       &cmd->t_iomem.t_prot_nents,
 				       cmd->prot_length, true, false);
 		if (ret < 0)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -2406,14 +2408,15 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 			else
 				bidi_length = cmd->data_length;
 
-			ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
-					       &cmd->t_bidi_data_nents,
+			ret = target_alloc_sgl(&cmd->t_iomem.t_bidi_data_sg,
+					       &cmd->t_iomem.t_bidi_data_nents,
 					       bidi_length, zero_flag, false);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
 
-		ret = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents,
+		ret = target_alloc_sgl(&cmd->t_iomem.t_data_sg,
+				       &cmd->t_iomem.t_data_nents,
 				       cmd->data_length, zero_flag, false);
 		if (ret < 0)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -2426,8 +2429,8 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 		u32 caw_length = cmd->t_task_nolb *
 				 cmd->se_dev->dev_attrib.block_size;
 
-		ret = target_alloc_sgl(&cmd->t_bidi_data_sg,
-				       &cmd->t_bidi_data_nents,
+		ret = target_alloc_sgl(&cmd->t_iomem.t_bidi_data_sg,
+				       &cmd->t_iomem.t_bidi_data_nents,
 				       caw_length, zero_flag, false);
 		if (ret < 0)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 62bf4fe..5013611 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -415,8 +415,8 @@ static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	 * expensive to tell how many regions are freed in the bitmap
 	*/
 	base_command_size = max(offsetof(struct tcmu_cmd_entry,
-				req.iov[se_cmd->t_bidi_data_nents +
-					se_cmd->t_data_nents]),
+				req.iov[se_cmd->t_iomem.t_bidi_data_nents +
+					se_cmd->t_iomem.t_data_nents]),
 				sizeof(struct tcmu_cmd_entry));
 	command_size = base_command_size
 		+ round_up(scsi_command_size(se_cmd->t_task_cdb), TCMU_OP_ALIGN_SIZE);
@@ -429,8 +429,9 @@ static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
 	data_length = se_cmd->data_length;
 	if (se_cmd->se_cmd_flags & SCF_BIDI) {
-		BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents));
-		data_length += se_cmd->t_bidi_data_sg->length;
+		BUG_ON(!(se_cmd->t_iomem.t_bidi_data_sg &&
+			 se_cmd->t_iomem.t_bidi_data_nents));
+		data_length += se_cmd->t_iomem.t_bidi_data_sg->length;
 	}
 	if ((command_size > (udev->cmdr_size / 2))
 	    || data_length > udev->data_size)
@@ -494,15 +495,15 @@ static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	iov_cnt = 0;
 	copy_to_data_area = (se_cmd->data_direction == DMA_TO_DEVICE
 		|| se_cmd->se_cmd_flags & SCF_BIDI);
-	alloc_and_scatter_data_area(udev, se_cmd->t_data_sg,
-		se_cmd->t_data_nents, &iov, &iov_cnt, copy_to_data_area);
+	alloc_and_scatter_data_area(udev, se_cmd->t_iomem.t_data_sg,
+		se_cmd->t_iomem.t_data_nents, &iov, &iov_cnt, copy_to_data_area);
 	entry->req.iov_cnt = iov_cnt;
 	entry->req.iov_dif_cnt = 0;
 
 	/* Handle BIDI commands */
 	iov_cnt = 0;
-	alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
-		se_cmd->t_bidi_data_nents, &iov, &iov_cnt, false);
+	alloc_and_scatter_data_area(udev, se_cmd->t_iomem.t_bidi_data_sg,
+		se_cmd->t_iomem.t_bidi_data_nents, &iov, &iov_cnt, false);
 	entry->req.iov_bidi_cnt = iov_cnt;
 
 	/* cmd's data_bitmap is what changed in process */
@@ -584,14 +585,14 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 		/* Get Data-In buffer before clean up */
 		bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
 		gather_data_area(udev, bitmap,
-			se_cmd->t_bidi_data_sg, se_cmd->t_bidi_data_nents);
+			se_cmd->t_iomem.t_bidi_data_sg, se_cmd->t_iomem.t_bidi_data_nents);
 		free_data_area(udev, cmd);
 	} else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
 		DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
 
 		bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
 		gather_data_area(udev, bitmap,
-			se_cmd->t_data_sg, se_cmd->t_data_nents);
+			se_cmd->t_iomem.t_data_sg, se_cmd->t_iomem.t_data_nents);
 		free_data_area(udev, cmd);
 	} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
 		free_data_area(udev, cmd);
diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
index 75cd854..0a4bd8a 100644
--- a/drivers/target/target_core_xcopy.c
+++ b/drivers/target/target_core_xcopy.c
@@ -562,7 +562,8 @@ static int target_xcopy_setup_pt_cmd(
 	}
 
 	if (alloc_mem) {
-		rc = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents,
+		rc = target_alloc_sgl(&cmd->t_iomem.t_data_sg,
+				      &cmd->t_iomem.t_data_nents,
 				      cmd->data_length, false, false);
 		if (rc < 0) {
 			ret = rc;
@@ -588,7 +589,7 @@ static int target_xcopy_setup_pt_cmd(
 		}
 
 		pr_debug("Setup PASSTHROUGH_NOALLOC t_data_sg: %p t_data_nents:"
-			 " %u\n", cmd->t_data_sg, cmd->t_data_nents);
+			 " %u\n", cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents);
 	}
 
 	return 0;
@@ -657,8 +658,8 @@ static int target_xcopy_read_source(
 		return rc;
 	}
 
-	xop->xop_data_sg = se_cmd->t_data_sg;
-	xop->xop_data_nents = se_cmd->t_data_nents;
+	xop->xop_data_sg = se_cmd->t_iomem.t_data_sg;
+	xop->xop_data_nents = se_cmd->t_iomem.t_data_nents;
 	pr_debug("XCOPY-READ: Saved xop->xop_data_sg: %p, num: %u for READ"
 		" memory\n", xop->xop_data_sg, xop->xop_data_nents);
 
@@ -671,8 +672,8 @@ static int target_xcopy_read_source(
 	 * Clear off the allocated t_data_sg, that has been saved for
 	 * zero-copy WRITE submission reuse in struct xcopy_op..
 	 */
-	se_cmd->t_data_sg = NULL;
-	se_cmd->t_data_nents = 0;
+	se_cmd->t_iomem.t_data_sg = NULL;
+	se_cmd->t_iomem.t_data_nents = 0;
 
 	return 0;
 }
@@ -720,8 +721,8 @@ static int target_xcopy_write_destination(
 		 * core releases this memory on error during X-COPY WRITE I/O.
 		 */
 		src_cmd->se_cmd_flags &= ~SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
-		src_cmd->t_data_sg = xop->xop_data_sg;
-		src_cmd->t_data_nents = xop->xop_data_nents;
+		src_cmd->t_iomem.t_data_sg = xop->xop_data_sg;
+		src_cmd->t_iomem.t_data_nents = xop->xop_data_nents;
 
 		transport_generic_free_cmd(se_cmd, 0);
 		return rc;
diff --git a/drivers/target/tcm_fc/tfc_cmd.c b/drivers/target/tcm_fc/tfc_cmd.c
index 216e18c..04c98d0 100644
--- a/drivers/target/tcm_fc/tfc_cmd.c
+++ b/drivers/target/tcm_fc/tfc_cmd.c
@@ -55,10 +55,10 @@ static void _ft_dump_cmd(struct ft_cmd *cmd, const char *caller)
 		caller, cmd, cmd->sess, cmd->seq, se_cmd);
 
 	pr_debug("%s: cmd %p data_nents %u len %u se_cmd_flags <0x%x>\n",
-		caller, cmd, se_cmd->t_data_nents,
+		caller, cmd, se_cmd->t_iomem.t_data_nents,
 	       se_cmd->data_length, se_cmd->se_cmd_flags);
 
-	for_each_sg(se_cmd->t_data_sg, sg, se_cmd->t_data_nents, count)
+	for_each_sg(se_cmd->t_iomem.t_data_sg, sg, se_cmd->t_iomem.t_data_nents, count)
 		pr_debug("%s: cmd %p sg %p page %p "
 			"len 0x%x off 0x%x\n",
 			caller, cmd, sg,
@@ -237,8 +237,8 @@ int ft_write_pending(struct se_cmd *se_cmd)
 		    (fh->fh_r_ctl == FC_RCTL_DD_DATA_DESC)) {
 			if ((se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) &&
 			    lport->tt.ddp_target(lport, ep->xid,
-						 se_cmd->t_data_sg,
-						 se_cmd->t_data_nents))
+						 se_cmd->t_iomem.t_data_sg,
+						 se_cmd->t_iomem.t_data_nents))
 				cmd->was_ddp_setup = 1;
 		}
 	}
diff --git a/drivers/target/tcm_fc/tfc_io.c b/drivers/target/tcm_fc/tfc_io.c
index 6f7c65a..86ae4c5 100644
--- a/drivers/target/tcm_fc/tfc_io.c
+++ b/drivers/target/tcm_fc/tfc_io.c
@@ -89,9 +89,9 @@ int ft_queue_data_in(struct se_cmd *se_cmd)
 	/*
 	 * Setup to use first mem list entry, unless no data.
 	 */
-	BUG_ON(remaining && !se_cmd->t_data_sg);
+	BUG_ON(remaining && !se_cmd->t_iomem.t_data_sg);
 	if (remaining) {
-		sg = se_cmd->t_data_sg;
+		sg = se_cmd->t_iomem.t_data_sg;
 		mem_len = sg->length;
 		mem_off = sg->offset;
 		page = sg_page(sg);
@@ -248,7 +248,8 @@ void ft_recv_write_data(struct ft_cmd *cmd, struct fc_frame *fp)
 				"payload, Frame will be dropped if"
 				"'Sequence Initiative' bit in f_ctl is"
 				"not set\n", __func__, ep->xid, f_ctl,
-				se_cmd->t_data_sg, se_cmd->t_data_nents);
+				se_cmd->t_iomem.t_data_sg,
+				se_cmd->t_iomem.t_data_nents);
 		/*
 		 * Invalidate HW DDP context if it was setup for respective
 		 * command. Invalidation of HW DDP context is requited in both
@@ -286,9 +287,9 @@ void ft_recv_write_data(struct ft_cmd *cmd, struct fc_frame *fp)
 	/*
 	 * Setup to use first mem list entry, unless no data.
 	 */
-	BUG_ON(frame_len && !se_cmd->t_data_sg);
+	BUG_ON(frame_len && !se_cmd->t_iomem.t_data_sg);
 	if (frame_len) {
-		sg = se_cmd->t_data_sg;
+		sg = se_cmd->t_iomem.t_data_sg;
 		mem_len = sg->length;
 		mem_off = sg->offset;
 		page = sg_page(sg);
diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
index 35fe3c8..8986132 100644
--- a/drivers/usb/gadget/function/f_tcm.c
+++ b/drivers/usb/gadget/function/f_tcm.c
@@ -217,16 +217,16 @@ static int bot_send_read_response(struct usbg_cmd *cmd)
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
-		sg_copy_to_buffer(se_cmd->t_data_sg,
-				se_cmd->t_data_nents,
+		sg_copy_to_buffer(se_cmd->t_iomem.t_data_sg,
+				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
 				se_cmd->data_length);
 
 		fu->bot_req_in->buf = cmd->data_buf;
 	} else {
 		fu->bot_req_in->buf = NULL;
-		fu->bot_req_in->num_sgs = se_cmd->t_data_nents;
-		fu->bot_req_in->sg = se_cmd->t_data_sg;
+		fu->bot_req_in->num_sgs = se_cmd->t_iomem.t_data_nents;
+		fu->bot_req_in->sg = se_cmd->t_iomem.t_data_sg;
 	}
 
 	fu->bot_req_in->complete = bot_read_compl;
@@ -264,8 +264,8 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
 		fu->bot_req_out->buf = cmd->data_buf;
 	} else {
 		fu->bot_req_out->buf = NULL;
-		fu->bot_req_out->num_sgs = se_cmd->t_data_nents;
-		fu->bot_req_out->sg = se_cmd->t_data_sg;
+		fu->bot_req_out->num_sgs = se_cmd->t_iomem.t_data_nents;
+		fu->bot_req_out->sg = se_cmd->t_iomem.t_data_sg;
 	}
 
 	fu->bot_req_out->complete = usbg_data_write_cmpl;
@@ -519,16 +519,16 @@ static int uasp_prepare_r_request(struct usbg_cmd *cmd)
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
-		sg_copy_to_buffer(se_cmd->t_data_sg,
-				se_cmd->t_data_nents,
+		sg_copy_to_buffer(se_cmd->t_iomem.t_data_sg,
+				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
 				se_cmd->data_length);
 
 		stream->req_in->buf = cmd->data_buf;
 	} else {
 		stream->req_in->buf = NULL;
-		stream->req_in->num_sgs = se_cmd->t_data_nents;
-		stream->req_in->sg = se_cmd->t_data_sg;
+		stream->req_in->num_sgs = se_cmd->t_iomem.t_data_nents;
+		stream->req_in->sg = se_cmd->t_iomem.t_data_sg;
 	}
 
 	stream->req_in->complete = uasp_status_data_cmpl;
@@ -960,8 +960,8 @@ static void usbg_data_write_cmpl(struct usb_ep *ep, struct usb_request *req)
 	}
 
 	if (req->num_sgs == 0) {
-		sg_copy_from_buffer(se_cmd->t_data_sg,
-				se_cmd->t_data_nents,
+		sg_copy_from_buffer(se_cmd->t_iomem.t_data_sg,
+				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
 				se_cmd->data_length);
 	}
@@ -987,8 +987,8 @@ static int usbg_prepare_w_request(struct usbg_cmd *cmd, struct usb_request *req)
 		req->buf = cmd->data_buf;
 	} else {
 		req->buf = NULL;
-		req->num_sgs = se_cmd->t_data_nents;
-		req->sg = se_cmd->t_data_sg;
+		req->num_sgs = se_cmd->t_iomem.t_data_nents;
+		req->sg = se_cmd->t_iomem.t_data_sg;
 	}
 
 	req->complete = usbg_data_write_cmpl;
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index b316b44..29ee45b 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -432,6 +432,23 @@ enum target_core_dif_check {
 #define TCM_ORDERED_TAG	0x22
 #define TCM_ACA_TAG	0x24
 
+struct target_iomem {
+	/* Used to store READ/WRITE payloads */
+	struct scatterlist	*t_data_sg;
+	unsigned int		t_data_nents;
+	/* Used to store COMPARE_AND_WRITE payload */
+	struct scatterlist	*t_data_sg_orig;
+	unsigned int		t_data_nents_orig;
+	/* Used to map payload for CONTROL CDB emulation */
+	void			*t_data_vmap;
+	/* Used to store bidirectional READ payload */
+	struct scatterlist	*t_bidi_data_sg;
+	unsigned int		t_bidi_data_nents;
+	/* Used to store T10-PI payload */
+	struct scatterlist	*t_prot_sg;
+	unsigned int		t_prot_nents;
+};
+
 struct se_cmd {
 	/* SAM response code being sent to initiator */
 	u8			scsi_status;
@@ -495,14 +512,7 @@ struct se_cmd {
 	struct completion	t_transport_stop_comp;
 
 	struct work_struct	work;
-
-	struct scatterlist	*t_data_sg;
-	struct scatterlist	*t_data_sg_orig;
-	unsigned int		t_data_nents;
-	unsigned int		t_data_nents_orig;
-	void			*t_data_vmap;
-	struct scatterlist	*t_bidi_data_sg;
-	unsigned int		t_bidi_data_nents;
+	struct target_iomem	t_iomem;
 
 	/* Used for lun->lun_ref counting */
 	int			lun_ref_active;
@@ -519,8 +529,6 @@ struct se_cmd {
 	bool			prot_pto;
 	u32			prot_length;
 	u32			reftag_seed;
-	struct scatterlist	*t_prot_sg;
-	unsigned int		t_prot_nents;
 	sense_reason_t		pi_err;
 	sector_t		bad_sector;
 	int			cpuid;
-- 
1.9.1


* [PATCH 03/14] target: Add target_iostate descriptor
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 01/14] target: Fix for hang of Ordered task in TCM Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 02/14] target: Add target_iomem descriptor Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 04/14] target: Add target_complete_ios wrapper Nicholas A. Bellinger
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch introduces a new struct target_iostate descriptor
containing the logical block address, length in bytes, data_direction,
T10-PI state, and a completion callback.

This includes:

    - t_task_* // Used for LBA + Number of LBAs
    - data_* // Used for length and direction
    - prot_* // T10-PI related
    - reftag_seed // T10-PI related
    - bad_sector // T10-PI related
    - iomem // Pointer to struct target_iomem descriptor
    - se_dev // Pointer to struct se_device backend
    - t_comp_func // Pointer to submission callback
    - priv // Used by IBLOCK, dropped in a separate patch

It also includes the associated tree-wide mechanical conversion
of target backend and fabric driver code.
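For readers following along without the series applied, the field list
above roughly corresponds to a descriptor of the following shape. This
is a hedged sketch reconstructed from the commit message only, not the
patch itself; the exact types, field ordering, and the t_comp_func
signature are assumptions:

```c
/*
 * Hypothetical sketch of struct target_iostate, built from the field
 * list in the commit message above. Types, ordering, and the callback
 * signature are assumptions, not the patch's actual layout. Stand-in
 * typedefs replace kernel types so the sketch is self-contained.
 */
#include <assert.h>
#include <stddef.h>

typedef unsigned long long sector_t;    /* stand-in for the kernel typedef */

enum dma_data_direction {
	DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE,
};

struct target_iomem;    /* SGL container introduced in patch 02/14 */
struct se_device;       /* target-core backend device */

struct target_iostate {
	sector_t                t_task_lba;     /* starting LBA */
	unsigned int            t_task_nolb;    /* number of LBAs */
	unsigned int            data_length;    /* I/O length in bytes */
	enum dma_data_direction data_direction;
	int                     prot_op;        /* T10-PI operation */
	int                     prot_type;      /* T10-PI DIF type */
	unsigned int            prot_checks;    /* T10-PI checks to run */
	unsigned int            reftag_seed;    /* T10-PI reference tag seed */
	sector_t                bad_sector;     /* T10-PI failure location */
	struct target_iomem     *iomem;         /* payload + protection SGLs */
	struct se_device        *se_dev;        /* backend device */
	/* submission/completion callback; signature is a guess */
	void                    (*t_comp_func)(struct target_iostate *ios,
					       unsigned short scsi_status);
	void                    *priv;          /* IBLOCK-only, dropped later */
};
```

The point of the split is visible in the sketch: everything a backend
needs to execute an I/O lives here and in target_iomem, with no
reference back to se_cmd or other SCSI-fabric state.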

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/infiniband/ulp/isert/ib_isert.c           |  37 +++---
 drivers/infiniband/ulp/srpt/ib_srpt.c             |   6 +-
 drivers/scsi/qla2xxx/qla_target.c                 |  60 ++++-----
 drivers/scsi/qla2xxx/tcm_qla2xxx.c                |  10 +-
 drivers/target/iscsi/cxgbit/cxgbit_ddp.c          |   4 +-
 drivers/target/iscsi/cxgbit/cxgbit_target.c       |  10 +-
 drivers/target/iscsi/iscsi_target.c               |  22 ++--
 drivers/target/iscsi/iscsi_target_datain_values.c |  18 +--
 drivers/target/iscsi/iscsi_target_erl0.c          |  24 ++--
 drivers/target/iscsi/iscsi_target_erl1.c          |   8 +-
 drivers/target/iscsi/iscsi_target_seq_pdu_list.c  |  40 +++---
 drivers/target/iscsi/iscsi_target_tmr.c           |   4 +-
 drivers/target/iscsi/iscsi_target_util.c          |   4 +-
 drivers/target/loopback/tcm_loop.c                |   2 +-
 drivers/target/sbp/sbp_target.c                   |   4 +-
 drivers/target/target_core_alua.c                 |  34 ++---
 drivers/target/target_core_device.c               |  22 ++--
 drivers/target/target_core_file.c                 |  48 +++----
 drivers/target/target_core_iblock.c               |  13 +-
 drivers/target/target_core_pr.c                   |  56 ++++----
 drivers/target/target_core_pscsi.c                |  12 +-
 drivers/target/target_core_rd.c                   |  18 +--
 drivers/target/target_core_sbc.c                  | 152 +++++++++++-----------
 drivers/target/target_core_spc.c                  |  28 ++--
 drivers/target/target_core_transport.c            | 106 +++++++--------
 drivers/target/target_core_user.c                 |  12 +-
 drivers/target/target_core_xcopy.c                |  10 +-
 drivers/target/tcm_fc/tfc_cmd.c                   |   6 +-
 drivers/target/tcm_fc/tfc_io.c                    |  10 +-
 drivers/usb/gadget/function/f_tcm.c               |  22 ++--
 drivers/vhost/scsi.c                              |   2 +-
 include/target/target_core_base.h                 |  39 ++++--
 include/target/target_core_fabric.h               |   2 +-
 include/trace/events/target.h                     |   4 +-
 34 files changed, 433 insertions(+), 416 deletions(-)

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 2c41a8b..861d0e0 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -65,7 +65,7 @@ static inline bool
 isert_prot_cmd(struct isert_conn *conn, struct se_cmd *cmd)
 {
 	return (conn->pi_support &&
-		cmd->prot_op != TARGET_PROT_NORMAL);
+		cmd->t_iostate.prot_op != TARGET_PROT_NORMAL);
 }
 
 
@@ -1111,7 +1111,7 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
 	imm_data = cmd->immediate_data;
 	imm_data_len = cmd->first_burst_len;
 	unsol_data = cmd->unsolicited_data;
-	data_len = cmd->se_cmd.data_length;
+	data_len = cmd->se_cmd.t_iostate.data_length;
 
 	if (imm_data && imm_data_len == data_len)
 		cmd->se_cmd.se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
@@ -1143,7 +1143,7 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
 
 	cmd->write_data_done += imm_data_len;
 
-	if (cmd->write_data_done == cmd->se_cmd.data_length) {
+	if (cmd->write_data_done == cmd->se_cmd.t_iostate.data_length) {
 		spin_lock_bh(&cmd->istate_lock);
 		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
 		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
@@ -1189,7 +1189,7 @@ isert_handle_iscsi_dataout(struct isert_conn *isert_conn,
 	isert_dbg("Unsolicited DataOut unsol_data_len: %u, "
 		  "write_data_done: %u, data_length: %u\n",
 		  unsol_data_len,  cmd->write_data_done,
-		  cmd->se_cmd.data_length);
+		  cmd->se_cmd.t_iostate.data_length);
 
 	sg_off = cmd->write_data_done / PAGE_SIZE;
 	sg_start = &cmd->se_cmd.t_iomem.t_data_sg[sg_off];
@@ -1614,12 +1614,12 @@ isert_check_pi_status(struct se_cmd *se_cmd, struct ib_mr *sig_mr)
 		}
 		sec_offset_err = mr_status.sig_err.sig_err_offset;
 		do_div(sec_offset_err, block_size);
-		se_cmd->bad_sector = sec_offset_err + se_cmd->t_task_lba;
+		se_cmd->t_iostate.bad_sector = sec_offset_err + se_cmd->t_iostate.t_task_lba;
 
 		isert_err("PI error found type %d at sector 0x%llx "
 			  "expected 0x%x vs actual 0x%x\n",
 			  mr_status.sig_err.err_type,
-			  (unsigned long long)se_cmd->bad_sector,
+			  (unsigned long long)se_cmd->t_iostate.bad_sector,
 			  mr_status.sig_err.expected,
 			  mr_status.sig_err.actual);
 		ret = 1;
@@ -2025,7 +2025,7 @@ isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs,
 	domain->sig_type = IB_SIG_TYPE_T10_DIF;
 	domain->sig.dif.bg_type = IB_T10DIF_CRC;
 	domain->sig.dif.pi_interval = se_cmd->se_dev->dev_attrib.block_size;
-	domain->sig.dif.ref_tag = se_cmd->reftag_seed;
+	domain->sig.dif.ref_tag = se_cmd->t_iostate.reftag_seed;
 	/*
 	 * At the moment we hard code those, but if in the future
 	 * the target core would like to use it, we will take it
@@ -2034,17 +2034,19 @@ isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs,
 	domain->sig.dif.apptag_check_mask = 0xffff;
 	domain->sig.dif.app_escape = true;
 	domain->sig.dif.ref_escape = true;
-	if (se_cmd->prot_type == TARGET_DIF_TYPE1_PROT ||
-	    se_cmd->prot_type == TARGET_DIF_TYPE2_PROT)
+	if (se_cmd->t_iostate.prot_type == TARGET_DIF_TYPE1_PROT ||
+	    se_cmd->t_iostate.prot_type == TARGET_DIF_TYPE2_PROT)
 		domain->sig.dif.ref_remap = true;
 };
 
 static int
 isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs)
 {
+	struct target_iostate *ios = &se_cmd->t_iostate;
+
 	memset(sig_attrs, 0, sizeof(*sig_attrs));
 
-	switch (se_cmd->prot_op) {
+	switch (se_cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DIN_INSERT:
 	case TARGET_PROT_DOUT_STRIP:
 		sig_attrs->mem.sig_type = IB_SIG_TYPE_NONE;
@@ -2061,14 +2063,14 @@ isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs)
 		isert_set_dif_domain(se_cmd, sig_attrs, &sig_attrs->mem);
 		break;
 	default:
-		isert_err("Unsupported PI operation %d\n", se_cmd->prot_op);
+		isert_err("Unsupported PI operation %d\n", se_cmd->t_iostate.prot_op);
 		return -EINVAL;
 	}
 
 	sig_attrs->check_mask =
-	       (se_cmd->prot_checks & TARGET_DIF_CHECK_GUARD  ? 0xc0 : 0) |
-	       (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x30 : 0) |
-	       (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x0f : 0);
+	       (ios->prot_checks & TARGET_DIF_CHECK_GUARD  ? 0xc0 : 0) |
+	       (ios->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x30 : 0) |
+	       (ios->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x0f : 0);
 	return 0;
 }
 
@@ -2133,7 +2135,7 @@ isert_put_datain(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
 	int rc;
 
 	isert_dbg("Cmd: %p RDMA_WRITE data_length: %u\n",
-		 isert_cmd, se_cmd->data_length);
+		 isert_cmd, se_cmd->t_iostate.data_length);
 
 	if (isert_prot_cmd(isert_conn, se_cmd)) {
 		isert_cmd->tx_desc.tx_cqe.done = isert_rdma_write_done;
@@ -2168,9 +2170,10 @@ static int
 isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
 {
 	struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
+	struct se_cmd *se_cmd = &cmd->se_cmd;
 
 	isert_dbg("Cmd: %p RDMA_READ data_length: %u write_data_done: %u\n",
-		 isert_cmd, cmd->se_cmd.data_length, cmd->write_data_done);
+		 isert_cmd, se_cmd->t_iostate.data_length, cmd->write_data_done);
 
 	isert_cmd->tx_desc.tx_cqe.done = isert_rdma_read_done;
 	isert_rdma_rw_ctx_post(isert_cmd, conn->context,
@@ -2556,7 +2559,7 @@ isert_put_unsol_pending_cmds(struct iscsi_conn *conn)
 	list_for_each_entry_safe(cmd, tmp, &conn->conn_cmd_list, i_conn_node) {
 		if ((cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA) &&
 		    (cmd->write_data_done < conn->sess->sess_ops->FirstBurstLength) &&
-		    (cmd->write_data_done < cmd->se_cmd.data_length))
+		    (cmd->write_data_done < cmd->se_cmd.t_iostate.data_length))
 			list_move_tail(&cmd->i_conn_node, &drop_cmd_list);
 	}
 	spin_unlock_bh(&conn->cmd_lock);
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index e68b20cb..8f675a7 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -937,7 +937,7 @@ static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx,
 		*dir = DMA_NONE;
 
 	/* initialize data_direction early as srpt_alloc_rw_ctxs needs it */
-	ioctx->cmd.data_direction = *dir;
+	ioctx->cmd.t_iostate.data_direction = *dir;
 
 	if (((srp_cmd->buf_fmt & 0xf) == SRP_DATA_DESC_DIRECT) ||
 	    ((srp_cmd->buf_fmt >> 4) == SRP_DATA_DESC_DIRECT)) {
@@ -2296,8 +2296,8 @@ static void srpt_queue_response(struct se_cmd *cmd)
 	}
 
 	/* For read commands, transfer the data to the initiator. */
-	if (ioctx->cmd.data_direction == DMA_FROM_DEVICE &&
-	    ioctx->cmd.data_length &&
+	if (ioctx->cmd.t_iostate.data_direction == DMA_FROM_DEVICE &&
+	    ioctx->cmd.t_iostate.data_length &&
 	    !ioctx->queue_status_only) {
 		for (i = ioctx->n_rw_ctx - 1; i >= 0; i--) {
 			struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index f93bd5f..a1ab13f 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -1808,7 +1808,7 @@ static int qlt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
 
 	prm->cmd->sg_mapped = 1;
 
-	if (cmd->se_cmd.prot_op == TARGET_PROT_NORMAL) {
+	if (cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_NORMAL) {
 		/*
 		 * If greater than four sg entries then we need to allocate
 		 * the continuation entries
@@ -1819,8 +1819,8 @@ static int qlt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
 			prm->tgt->datasegs_per_cont);
 	} else {
 		/* DIF */
-		if ((cmd->se_cmd.prot_op == TARGET_PROT_DIN_INSERT) ||
-		    (cmd->se_cmd.prot_op == TARGET_PROT_DOUT_STRIP)) {
+		if ((cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_DIN_INSERT) ||
+		    (cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_DOUT_STRIP)) {
 			prm->seg_cnt = DIV_ROUND_UP(cmd->bufflen, cmd->blk_sz);
 			prm->tot_dsds = prm->seg_cnt;
 		} else
@@ -1834,8 +1834,8 @@ static int qlt_pci_map_calc_cnt(struct qla_tgt_prm *prm)
 			if (unlikely(prm->prot_seg_cnt == 0))
 				goto out_err;
 
-			if ((cmd->se_cmd.prot_op == TARGET_PROT_DIN_INSERT) ||
-			    (cmd->se_cmd.prot_op == TARGET_PROT_DOUT_STRIP)) {
+			if ((cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_DIN_INSERT) ||
+			    (cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_DOUT_STRIP)) {
 			/* DIF bundling is not supported here */
 				prm->prot_seg_cnt = DIV_ROUND_UP(cmd->bufflen,
 								cmd->blk_sz);
@@ -2355,7 +2355,7 @@ qlt_hba_err_chk_enabled(struct se_cmd *se_cmd)
 	 return 0;
 	 *
 	 */
-	switch (se_cmd->prot_op) {
+	switch (se_cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DOUT_INSERT:
 	case TARGET_PROT_DIN_STRIP:
 		if (ql2xenablehba_err_chk >= 1)
@@ -2382,7 +2382,7 @@ qlt_hba_err_chk_enabled(struct se_cmd *se_cmd)
 static inline void
 qlt_set_t10dif_tags(struct se_cmd *se_cmd, struct crc_context *ctx)
 {
-	uint32_t lba = 0xffffffff & se_cmd->t_task_lba;
+	uint32_t lba = 0xffffffff & se_cmd->t_iostate.t_task_lba;
 
 	/* wait until Mode Sense/Select cmd, modepage Ah, subpage 2
 	 * have been implemented by TCM, before AppTag is avail.
@@ -2392,7 +2392,7 @@ qlt_set_t10dif_tags(struct se_cmd *se_cmd, struct crc_context *ctx)
 	ctx->app_tag_mask[0] = 0x0;
 	ctx->app_tag_mask[1] = 0x0;
 
-	switch (se_cmd->prot_type) {
+	switch (se_cmd->t_iostate.prot_type) {
 	case TARGET_DIF_TYPE0_PROT:
 		/*
 		 * No check for ql2xenablehba_err_chk, as it would be an
@@ -2479,18 +2479,18 @@ qlt_build_ctio_crc2_pkt(struct qla_tgt_prm *prm, scsi_qla_host_t *vha)
 
 	ql_dbg(ql_dbg_tgt, vha, 0xe071,
 		"qla_target(%d):%s: se_cmd[%p] CRC2 prot_op[0x%x] cmd prot sg:cnt[%p:%x] lba[%llu]\n",
-		vha->vp_idx, __func__, se_cmd, se_cmd->prot_op,
-		prm->prot_sg, prm->prot_seg_cnt, se_cmd->t_task_lba);
+		vha->vp_idx, __func__, se_cmd, se_cmd->t_iostate.prot_op,
+		prm->prot_sg, prm->prot_seg_cnt, se_cmd->t_iostate.t_task_lba);
 
-	if ((se_cmd->prot_op == TARGET_PROT_DIN_INSERT) ||
-	    (se_cmd->prot_op == TARGET_PROT_DOUT_STRIP))
+	if ((se_cmd->t_iostate.prot_op == TARGET_PROT_DIN_INSERT) ||
+	    (se_cmd->t_iostate.prot_op == TARGET_PROT_DOUT_STRIP))
 		bundling = 0;
 
 	/* Compute dif len and adjust data len to include protection */
 	data_bytes = cmd->bufflen;
 	dif_bytes  = (data_bytes / cmd->blk_sz) * 8;
 
-	switch (se_cmd->prot_op) {
+	switch (se_cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DIN_INSERT:
 	case TARGET_PROT_DOUT_STRIP:
 		transfer_length = data_bytes;
@@ -2513,14 +2513,14 @@ qlt_build_ctio_crc2_pkt(struct qla_tgt_prm *prm, scsi_qla_host_t *vha)
 		fw_prot_opts |= 0x10; /* Disable Guard tag checking */
 	/* HBA error checking enabled */
 	else if (IS_PI_UNINIT_CAPABLE(ha)) {
-		if ((se_cmd->prot_type == TARGET_DIF_TYPE1_PROT) ||
-		    (se_cmd->prot_type == TARGET_DIF_TYPE2_PROT))
+		if ((se_cmd->t_iostate.prot_type == TARGET_DIF_TYPE1_PROT) ||
+		    (se_cmd->t_iostate.prot_type == TARGET_DIF_TYPE2_PROT))
 			fw_prot_opts |= PO_DIS_VALD_APP_ESC;
-		else if (se_cmd->prot_type == TARGET_DIF_TYPE3_PROT)
+		else if (se_cmd->t_iostate.prot_type == TARGET_DIF_TYPE3_PROT)
 			fw_prot_opts |= PO_DIS_VALD_APP_REF_ESC;
 	}
 
-	switch (se_cmd->prot_op) {
+	switch (se_cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DIN_INSERT:
 	case TARGET_PROT_DOUT_INSERT:
 		fw_prot_opts |= PO_MODE_DIF_INSERT;
@@ -2732,7 +2732,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
 	if (unlikely(res))
 		goto out_unmap_unlock;
 
-	if (cmd->se_cmd.prot_op && (xmit_type & QLA_TGT_XMIT_DATA))
+	if (cmd->se_cmd.t_iostate.prot_op && (xmit_type & QLA_TGT_XMIT_DATA))
 		res = qlt_build_ctio_crc2_pkt(&prm, vha);
 	else
 		res = qlt_24xx_build_ctio_pkt(&prm, vha);
@@ -2748,7 +2748,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
 		    cpu_to_le16(CTIO7_FLAGS_DATA_IN |
 			CTIO7_FLAGS_STATUS_MODE_0);
 
-		if (cmd->se_cmd.prot_op == TARGET_PROT_NORMAL)
+		if (cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_NORMAL)
 			qlt_load_data_segments(&prm, vha);
 
 		if (prm.add_status_pkt == 0) {
@@ -2873,7 +2873,7 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
 	res = qlt_check_reserve_free_req(vha, prm.req_cnt);
 	if (res != 0)
 		goto out_unlock_free_unmap;
-	if (cmd->se_cmd.prot_op)
+	if (cmd->se_cmd.t_iostate.prot_op)
 		res = qlt_build_ctio_crc2_pkt(&prm, vha);
 	else
 		res = qlt_24xx_build_ctio_pkt(&prm, vha);
@@ -2887,7 +2887,7 @@ int qlt_rdy_to_xfer(struct qla_tgt_cmd *cmd)
 	pkt->u.status0.flags |= cpu_to_le16(CTIO7_FLAGS_DATA_OUT |
 	    CTIO7_FLAGS_STATUS_MODE_0);
 
-	if (cmd->se_cmd.prot_op == TARGET_PROT_NORMAL)
+	if (cmd->se_cmd.t_iostate.prot_op == TARGET_PROT_NORMAL)
 		qlt_load_data_segments(&prm, vha);
 
 	cmd->state = QLA_TGT_STATE_NEED_DATA;
@@ -2922,7 +2922,7 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 	uint32_t	e_ref_tag, a_ref_tag;
 	uint16_t	e_app_tag, a_app_tag;
 	uint16_t	e_guard, a_guard;
-	uint64_t	lba = cmd->se_cmd.t_task_lba;
+	uint64_t	lba = cmd->se_cmd.t_iostate.t_task_lba;
 
 	a_guard   = be16_to_cpu(*(uint16_t *)(ap + 0));
 	a_app_tag = be16_to_cpu(*(uint16_t *)(ap + 2));
@@ -2946,13 +2946,13 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 	 * For type 0,1,2: app tag is all 'f's
 	 */
 	if ((a_app_tag == 0xffff) &&
-	    ((cmd->se_cmd.prot_type != TARGET_DIF_TYPE3_PROT) ||
+	    ((cmd->se_cmd.t_iostate.prot_type != TARGET_DIF_TYPE3_PROT) ||
 	     (a_ref_tag == 0xffffffff))) {
 		uint32_t blocks_done;
 
 		/* 2TB boundary case covered automatically with this */
 		blocks_done = e_ref_tag - (uint32_t)lba + 1;
-		cmd->se_cmd.bad_sector = e_ref_tag;
+		cmd->se_cmd.t_iostate.bad_sector = e_ref_tag;
 		cmd->se_cmd.pi_err = 0;
 		ql_dbg(ql_dbg_tgt, vha, 0xf074,
 			"need to return scsi good\n");
@@ -2993,7 +2993,7 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 			spt += j;
 
 			spt->app_tag = 0xffff;
-			if (cmd->se_cmd.prot_type == SCSI_PROT_DIF_TYPE3)
+			if (cmd->se_cmd.t_iostate.prot_type == SCSI_PROT_DIF_TYPE3)
 				spt->ref_tag = 0xffffffff;
 #endif
 		}
@@ -3004,7 +3004,7 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 	/* check guard */
 	if (e_guard != a_guard) {
 		cmd->se_cmd.pi_err = TCM_LOGICAL_BLOCK_GUARD_CHECK_FAILED;
-		cmd->se_cmd.bad_sector = cmd->se_cmd.t_task_lba;
+		cmd->se_cmd.t_iostate.bad_sector = cmd->se_cmd.t_iostate.t_task_lba;
 
 		ql_log(ql_log_warn, vha, 0xe076,
 		    "Guard ERR: cdb 0x%x lba 0x%llx: [Actual|Expected] Ref Tag[0x%x|0x%x], App Tag [0x%x|0x%x], Guard [0x%x|0x%x] cmd=%p\n",
@@ -3017,7 +3017,7 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 	/* check ref tag */
 	if (e_ref_tag != a_ref_tag) {
 		cmd->se_cmd.pi_err = TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED;
-		cmd->se_cmd.bad_sector = e_ref_tag;
+		cmd->se_cmd.t_iostate.bad_sector = e_ref_tag;
 
 		ql_log(ql_log_warn, vha, 0xe077,
 			"Ref Tag ERR: cdb 0x%x lba 0x%llx: [Actual|Expected] Ref Tag[0x%x|0x%x], App Tag [0x%x|0x%x], Guard [0x%x|0x%x] cmd=%p\n",
@@ -3030,7 +3030,7 @@ qlt_handle_dif_error(struct scsi_qla_host *vha, struct qla_tgt_cmd *cmd,
 	/* check appl tag */
 	if (e_app_tag != a_app_tag) {
 		cmd->se_cmd.pi_err = TCM_LOGICAL_BLOCK_APP_TAG_CHECK_FAILED;
-		cmd->se_cmd.bad_sector = cmd->se_cmd.t_task_lba;
+		cmd->se_cmd.t_iostate.bad_sector = cmd->se_cmd.t_iostate.t_task_lba;
 
 		ql_log(ql_log_warn, vha, 0xe078,
 			"App Tag ERR: cdb 0x%x lba 0x%llx: [Actual|Expected] Ref Tag[0x%x|0x%x], App Tag [0x%x|0x%x], Guard [0x%x|0x%x] cmd=%p\n",
@@ -4734,7 +4734,7 @@ static void qlt_handle_srr(struct scsi_qla_host *vha,
 			    "scsi_status\n");
 			goto out_reject;
 		}
-		cmd->bufflen = se_cmd->data_length;
+		cmd->bufflen = se_cmd->t_iostate.data_length;
 
 		if (qlt_has_data(cmd)) {
 			if (qlt_srr_adjust_data(cmd, offset, &xmit_type) != 0)
@@ -4766,7 +4766,7 @@ static void qlt_handle_srr(struct scsi_qla_host *vha,
 			    " with non GOOD scsi_status\n");
 			goto out_reject;
 		}
-		cmd->bufflen = se_cmd->data_length;
+		cmd->bufflen = se_cmd->t_iostate.data_length;
 
 		if (qlt_has_data(cmd)) {
 			if (qlt_srr_adjust_data(cmd, offset, &xmit_type) != 0)
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index dace993..935519d 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -378,7 +378,7 @@ static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
 		return 0;
 	}
 	cmd->cmd_flags |= BIT_3;
-	cmd->bufflen = se_cmd->data_length;
+	cmd->bufflen = se_cmd->t_iostate.data_length;
 	cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
 
 	cmd->sg_cnt = se_cmd->t_iomem.t_data_nents;
@@ -592,7 +592,7 @@ static int tcm_qla2xxx_queue_data_in(struct se_cmd *se_cmd)
 	}
 
 	cmd->cmd_flags |= BIT_4;
-	cmd->bufflen = se_cmd->data_length;
+	cmd->bufflen = se_cmd->t_iostate.data_length;
 	cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
 
 	cmd->sg_cnt = se_cmd->t_iomem.t_data_nents;
@@ -617,7 +617,7 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
 				struct qla_tgt_cmd, se_cmd);
 	int xmit_type = QLA_TGT_XMIT_STATUS;
 
-	cmd->bufflen = se_cmd->data_length;
+	cmd->bufflen = se_cmd->t_iostate.data_length;
 	cmd->sg = NULL;
 	cmd->sg_cnt = 0;
 	cmd->offset = 0;
@@ -628,7 +628,7 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
 	}
 	cmd->cmd_flags |= BIT_5;
 
-	if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+	if (se_cmd->t_iostate.data_direction == DMA_FROM_DEVICE) {
 		/*
 		 * For FCP_READ with CHECK_CONDITION status, clear cmd->bufflen
 		 * for qla_tgt_xmit_response LLD code
@@ -638,7 +638,7 @@ static int tcm_qla2xxx_queue_status(struct se_cmd *se_cmd)
 			se_cmd->residual_count = 0;
 		}
 		se_cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
-		se_cmd->residual_count += se_cmd->data_length;
+		se_cmd->residual_count += se_cmd->t_iostate.data_length;
 
 		cmd->bufflen = 0;
 	}
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
index a0c94b4..1313d68 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_ddp.c
@@ -248,10 +248,10 @@ cxgbit_get_r2t_ttt(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 	ttinfo->sgl = cmd->se_cmd.t_iomem.t_data_sg;
 	ttinfo->nents = cmd->se_cmd.t_iomem.t_data_nents;
 
-	ret = cxgbit_ddp_reserve(csk, ttinfo, cmd->se_cmd.data_length);
+	ret = cxgbit_ddp_reserve(csk, ttinfo, cmd->se_cmd.t_iostate.data_length);
 	if (ret < 0) {
 		pr_info("csk 0x%p, cmd 0x%p, xfer len %u, sgcnt %u no ddp.\n",
-			csk, cmd, cmd->se_cmd.data_length, ttinfo->nents);
+			csk, cmd, cmd->se_cmd.t_iostate.data_length, ttinfo->nents);
 
 		ttinfo->sgl = NULL;
 		ttinfo->nents = 0;
diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c
index ac86574..02f6875 100644
--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c
+++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c
@@ -413,7 +413,7 @@ cxgbit_tx_datain_iso(struct cxgbit_sock *csk, struct iscsi_cmd *cmd,
 	struct sk_buff *skb;
 	struct iscsi_datain datain;
 	struct cxgbit_iso_info iso_info;
-	u32 data_length = cmd->se_cmd.data_length;
+	u32 data_length = cmd->se_cmd.t_iostate.data_length;
 	u32 mrdsl = conn->conn_ops->MaxRecvDataSegmentLength;
 	u32 num_pdu, plen, tx_data = 0;
 	bool task_sense = !!(cmd->se_cmd.se_cmd_flags &
@@ -531,7 +531,7 @@ cxgbit_xmit_datain_pdu(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 		       const struct iscsi_datain *datain)
 {
 	struct cxgbit_sock *csk = conn->context;
-	u32 data_length = cmd->se_cmd.data_length;
+	u32 data_length = cmd->se_cmd.t_iostate.data_length;
 	u32 padding = ((-data_length) & 3);
 	u32 mrdsl = conn->conn_ops->MaxRecvDataSegmentLength;
 
@@ -877,7 +877,7 @@ cxgbit_handle_immediate_data(struct iscsi_cmd *cmd, struct iscsi_scsi_req *hdr,
 
 	cmd->write_data_done += pdu_cb->dlen;
 
-	if (cmd->write_data_done == cmd->se_cmd.data_length) {
+	if (cmd->write_data_done == cmd->se_cmd.t_iostate.data_length) {
 		spin_lock_bh(&cmd->istate_lock);
 		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
 		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
@@ -954,7 +954,7 @@ cxgbit_handle_scsi_cmd(struct cxgbit_sock *csk, struct iscsi_cmd *cmd)
 	if (rc < 0)
 		return rc;
 
-	if (pdu_cb->dlen && (pdu_cb->dlen == cmd->se_cmd.data_length) &&
+	if (pdu_cb->dlen && (pdu_cb->dlen == cmd->se_cmd.t_iostate.data_length) &&
 	    (pdu_cb->nr_dfrags == 1))
 		cmd->se_cmd.se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
 
@@ -1001,7 +1001,7 @@ static int cxgbit_handle_iscsi_dataout(struct cxgbit_sock *csk)
 	pr_debug("DataOut data_len: %u, "
 		"write_data_done: %u, data_length: %u\n",
 		  data_len,  cmd->write_data_done,
-		  cmd->se_cmd.data_length);
+		  cmd->se_cmd.t_iostate.data_length);
 
 	if (!(pdu_cb->flags & PDUCBF_RX_DATA_DDPD)) {
 		sg_off = data_offset / PAGE_SIZE;
diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index 44388e3..4140344 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -978,7 +978,7 @@ static void iscsit_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
 
 static int iscsit_allocate_iovecs(struct iscsi_cmd *cmd)
 {
-	u32 iov_count = max(1UL, DIV_ROUND_UP(cmd->se_cmd.data_length, PAGE_SIZE));
+	u32 iov_count = max(1UL, DIV_ROUND_UP(cmd->se_cmd.t_iostate.data_length, PAGE_SIZE));
 
 	iov_count += ISCSI_IOV_DATA_BUFFER;
 
@@ -1478,10 +1478,10 @@ iscsit_check_dataout_hdr(struct iscsi_conn *conn, unsigned char *buf,
 	se_cmd = &cmd->se_cmd;
 	iscsit_mod_dataout_timer(cmd);
 
-	if ((be32_to_cpu(hdr->offset) + payload_length) > cmd->se_cmd.data_length) {
+	if ((be32_to_cpu(hdr->offset) + payload_length) > cmd->se_cmd.t_iostate.data_length) {
 		pr_err("DataOut Offset: %u, Length %u greater than"
 			" iSCSI Command EDTL %u, protocol error.\n",
-			hdr->offset, payload_length, cmd->se_cmd.data_length);
+			hdr->offset, payload_length, cmd->se_cmd.t_iostate.data_length);
 		return iscsit_reject_cmd(cmd, ISCSI_REASON_BOOKMARK_INVALID, buf);
 	}
 
@@ -2650,7 +2650,7 @@ static int iscsit_handle_immediate_data(
 
 	cmd->write_data_done += length;
 
-	if (cmd->write_data_done == cmd->se_cmd.data_length) {
+	if (cmd->write_data_done == cmd->se_cmd.t_iostate.data_length) {
 		spin_lock_bh(&cmd->istate_lock);
 		cmd->cmd_flags |= ICF_GOT_LAST_DATAOUT;
 		cmd->i_state = ISTATE_RECEIVED_LAST_DATAOUT;
@@ -2808,11 +2808,11 @@ static int iscsit_send_datain(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
 	/*
 	 * Be paranoid and double check the logic for now.
 	 */
-	if ((datain.offset + datain.length) > cmd->se_cmd.data_length) {
+	if ((datain.offset + datain.length) > cmd->se_cmd.t_iostate.data_length) {
 		pr_err("Command ITT: 0x%08x, datain.offset: %u and"
 			" datain.length: %u exceeds cmd->data_length: %u\n",
 			cmd->init_task_tag, datain.offset, datain.length,
-			cmd->se_cmd.data_length);
+			cmd->se_cmd.t_iostate.data_length);
 		return -1;
 	}
 
@@ -3116,8 +3116,8 @@ int iscsit_build_r2ts_for_cmd(
 					conn->sess->sess_ops->MaxBurstLength -
 					cmd->next_burst_len;
 
-				if (new_data_end > cmd->se_cmd.data_length)
-					xfer_len = cmd->se_cmd.data_length - offset;
+				if (new_data_end > cmd->se_cmd.t_iostate.data_length)
+					xfer_len = cmd->se_cmd.t_iostate.data_length - offset;
 				else
 					xfer_len =
 						conn->sess->sess_ops->MaxBurstLength -
@@ -3126,14 +3126,14 @@ int iscsit_build_r2ts_for_cmd(
 				int new_data_end = offset +
 					conn->sess->sess_ops->MaxBurstLength;
 
-				if (new_data_end > cmd->se_cmd.data_length)
-					xfer_len = cmd->se_cmd.data_length - offset;
+				if (new_data_end > cmd->se_cmd.t_iostate.data_length)
+					xfer_len = cmd->se_cmd.t_iostate.data_length - offset;
 				else
 					xfer_len = conn->sess->sess_ops->MaxBurstLength;
 			}
 			cmd->r2t_offset += xfer_len;
 
-			if (cmd->r2t_offset == cmd->se_cmd.data_length)
+			if (cmd->r2t_offset == cmd->se_cmd.t_iostate.data_length)
 				cmd->cmd_flags |= ICF_SENT_LAST_R2T;
 		} else {
 			struct iscsi_seq *seq;
diff --git a/drivers/target/iscsi/iscsi_target_datain_values.c b/drivers/target/iscsi/iscsi_target_datain_values.c
index 647d4a5..9aff3b8 100644
--- a/drivers/target/iscsi/iscsi_target_datain_values.c
+++ b/drivers/target/iscsi/iscsi_target_datain_values.c
@@ -108,7 +108,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_yes_and_yes(
 	read_data_done = (!dr->recovery) ?
 			cmd->read_data_done : dr->read_data_done;
 
-	read_data_left = (cmd->se_cmd.data_length - read_data_done);
+	read_data_left = (cmd->se_cmd.t_iostate.data_length - read_data_done);
 	if (!read_data_left) {
 		pr_err("ITT: 0x%08x read_data_left is zero!\n",
 				cmd->init_task_tag);
@@ -207,7 +207,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_no_and_yes(
 	seq_send_order = (!dr->recovery) ?
 			cmd->seq_send_order : dr->seq_send_order;
 
-	read_data_left = (cmd->se_cmd.data_length - read_data_done);
+	read_data_left = (cmd->se_cmd.t_iostate.data_length - read_data_done);
 	if (!read_data_left) {
 		pr_err("ITT: 0x%08x read_data_left is zero!\n",
 				cmd->init_task_tag);
@@ -226,8 +226,8 @@ static struct iscsi_datain_req *iscsit_set_datain_values_no_and_yes(
 	offset = (seq->offset + seq->next_burst_len);
 
 	if ((offset + conn->conn_ops->MaxRecvDataSegmentLength) >=
-	     cmd->se_cmd.data_length) {
-		datain->length = (cmd->se_cmd.data_length - offset);
+	     cmd->se_cmd.t_iostate.data_length) {
+		datain->length = (cmd->se_cmd.t_iostate.data_length - offset);
 		datain->offset = offset;
 
 		datain->flags |= ISCSI_FLAG_CMD_FINAL;
@@ -259,7 +259,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_no_and_yes(
 		}
 	}
 
-	if ((read_data_done + datain->length) == cmd->se_cmd.data_length)
+	if ((read_data_done + datain->length) == cmd->se_cmd.t_iostate.data_length)
 		datain->flags |= ISCSI_FLAG_DATA_STATUS;
 
 	datain->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
@@ -328,7 +328,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_yes_and_no(
 	read_data_done = (!dr->recovery) ?
 			cmd->read_data_done : dr->read_data_done;
 
-	read_data_left = (cmd->se_cmd.data_length - read_data_done);
+	read_data_left = (cmd->se_cmd.t_iostate.data_length - read_data_done);
 	if (!read_data_left) {
 		pr_err("ITT: 0x%08x read_data_left is zero!\n",
 				cmd->init_task_tag);
@@ -339,7 +339,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_yes_and_no(
 	if (!pdu)
 		return dr;
 
-	if ((read_data_done + pdu->length) == cmd->se_cmd.data_length) {
+	if ((read_data_done + pdu->length) == cmd->se_cmd.t_iostate.data_length) {
 		pdu->flags |= (ISCSI_FLAG_CMD_FINAL | ISCSI_FLAG_DATA_STATUS);
 		if (conn->sess->sess_ops->ErrorRecoveryLevel > 0)
 			pdu->flags |= ISCSI_FLAG_DATA_ACK;
@@ -428,7 +428,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_no_and_no(
 	seq_send_order = (!dr->recovery) ?
 			cmd->seq_send_order : dr->seq_send_order;
 
-	read_data_left = (cmd->se_cmd.data_length - read_data_done);
+	read_data_left = (cmd->se_cmd.t_iostate.data_length - read_data_done);
 	if (!read_data_left) {
 		pr_err("ITT: 0x%08x read_data_left is zero!\n",
 				cmd->init_task_tag);
@@ -458,7 +458,7 @@ static struct iscsi_datain_req *iscsit_set_datain_values_no_and_no(
 	} else
 		seq->next_burst_len += pdu->length;
 
-	if ((read_data_done + pdu->length) == cmd->se_cmd.data_length)
+	if ((read_data_done + pdu->length) == cmd->se_cmd.t_iostate.data_length)
 		pdu->flags |= ISCSI_FLAG_DATA_STATUS;
 
 	pdu->data_sn = (!dr->recovery) ? cmd->data_sn++ : dr->data_sn++;
diff --git a/drivers/target/iscsi/iscsi_target_erl0.c b/drivers/target/iscsi/iscsi_target_erl0.c
index b54e72c..61fe12a 100644
--- a/drivers/target/iscsi/iscsi_target_erl0.c
+++ b/drivers/target/iscsi/iscsi_target_erl0.c
@@ -45,9 +45,9 @@ void iscsit_set_dataout_sequence_values(
 	if (cmd->unsolicited_data) {
 		cmd->seq_start_offset = cmd->write_data_done;
 		cmd->seq_end_offset = (cmd->write_data_done +
-			((cmd->se_cmd.data_length >
+			((cmd->se_cmd.t_iostate.data_length >
 			  conn->sess->sess_ops->FirstBurstLength) ?
-			 conn->sess->sess_ops->FirstBurstLength : cmd->se_cmd.data_length));
+			 conn->sess->sess_ops->FirstBurstLength : cmd->se_cmd.t_iostate.data_length));
 		return;
 	}
 
@@ -56,15 +56,15 @@ void iscsit_set_dataout_sequence_values(
 
 	if (!cmd->seq_start_offset && !cmd->seq_end_offset) {
 		cmd->seq_start_offset = cmd->write_data_done;
-		cmd->seq_end_offset = (cmd->se_cmd.data_length >
+		cmd->seq_end_offset = (cmd->se_cmd.t_iostate.data_length >
 			conn->sess->sess_ops->MaxBurstLength) ?
 			(cmd->write_data_done +
-			conn->sess->sess_ops->MaxBurstLength) : cmd->se_cmd.data_length;
+			conn->sess->sess_ops->MaxBurstLength) : cmd->se_cmd.t_iostate.data_length;
 	} else {
 		cmd->seq_start_offset = cmd->seq_end_offset;
 		cmd->seq_end_offset = ((cmd->seq_end_offset +
 			conn->sess->sess_ops->MaxBurstLength) >=
-			cmd->se_cmd.data_length) ? cmd->se_cmd.data_length :
+			cmd->se_cmd.t_iostate.data_length) ? cmd->se_cmd.t_iostate.data_length :
 			(cmd->seq_end_offset +
 			 conn->sess->sess_ops->MaxBurstLength);
 	}
@@ -180,13 +180,13 @@ static int iscsit_dataout_check_unsolicited_sequence(
 		if (!conn->sess->sess_ops->DataPDUInOrder)
 			goto out;
 
-		if ((first_burst_len != cmd->se_cmd.data_length) &&
+		if ((first_burst_len != cmd->se_cmd.t_iostate.data_length) &&
 		    (first_burst_len != conn->sess->sess_ops->FirstBurstLength)) {
 			pr_err("Unsolicited non-immediate data"
 			" received %u does not equal FirstBurstLength: %u, and"
 			" does not equal ExpXferLen %u.\n", first_burst_len,
 				conn->sess->sess_ops->FirstBurstLength,
-				cmd->se_cmd.data_length);
+				cmd->se_cmd.t_iostate.data_length);
 			transport_send_check_condition_and_sense(&cmd->se_cmd,
 					TCM_INCORRECT_AMOUNT_OF_DATA, 0);
 			return DATAOUT_CANNOT_RECOVER;
@@ -199,10 +199,10 @@ static int iscsit_dataout_check_unsolicited_sequence(
 				conn->sess->sess_ops->FirstBurstLength);
 			return DATAOUT_CANNOT_RECOVER;
 		}
-		if (first_burst_len == cmd->se_cmd.data_length) {
+		if (first_burst_len == cmd->se_cmd.t_iostate.data_length) {
 			pr_err("Command ITT: 0x%08x reached"
 			" ExpXferLen: %u, but ISCSI_FLAG_CMD_FINAL is not set. protocol"
-			" error.\n", cmd->init_task_tag, cmd->se_cmd.data_length);
+			" error.\n", cmd->init_task_tag, cmd->se_cmd.t_iostate.data_length);
 			return DATAOUT_CANNOT_RECOVER;
 		}
 	}
@@ -293,7 +293,7 @@ static int iscsit_dataout_check_sequence(
 			if ((next_burst_len <
 			     conn->sess->sess_ops->MaxBurstLength) &&
 			   ((cmd->write_data_done + payload_length) <
-			     cmd->se_cmd.data_length)) {
+			     cmd->se_cmd.t_iostate.data_length)) {
 				pr_err("Command ITT: 0x%08x set ISCSI_FLAG_CMD_FINAL"
 				" before end of DataOUT sequence, protocol"
 				" error.\n", cmd->init_task_tag);
@@ -318,7 +318,7 @@ static int iscsit_dataout_check_sequence(
 				return DATAOUT_CANNOT_RECOVER;
 			}
 			if ((cmd->write_data_done + payload_length) ==
-					cmd->se_cmd.data_length) {
+					cmd->se_cmd.t_iostate.data_length) {
 				pr_err("Command ITT: 0x%08x reached"
 				" last DataOUT PDU in sequence but ISCSI_FLAG_"
 				"CMD_FINAL is not set, protocol error.\n",
@@ -640,7 +640,7 @@ static int iscsit_dataout_post_crc_passed(
 
 	cmd->write_data_done += payload_length;
 
-	if (cmd->write_data_done == cmd->se_cmd.data_length)
+	if (cmd->write_data_done == cmd->se_cmd.t_iostate.data_length)
 		return DATAOUT_SEND_TO_TRANSPORT;
 	else if (send_r2t)
 		return DATAOUT_SEND_R2T;
diff --git a/drivers/target/iscsi/iscsi_target_erl1.c b/drivers/target/iscsi/iscsi_target_erl1.c
index 9214c9da..acb57b0 100644
--- a/drivers/target/iscsi/iscsi_target_erl1.c
+++ b/drivers/target/iscsi/iscsi_target_erl1.c
@@ -1115,8 +1115,8 @@ static int iscsit_set_dataout_timeout_values(
 	if (cmd->unsolicited_data) {
 		*offset = 0;
 		*length = (conn->sess->sess_ops->FirstBurstLength >
-			   cmd->se_cmd.data_length) ?
-			   cmd->se_cmd.data_length :
+			   cmd->se_cmd.t_iostate.data_length) ?
+			   cmd->se_cmd.t_iostate.data_length :
 			   conn->sess->sess_ops->FirstBurstLength;
 		return 0;
 	}
@@ -1187,8 +1187,8 @@ static void iscsit_handle_dataout_timeout(unsigned long data)
 		if (conn->sess->sess_ops->DataPDUInOrder) {
 			pdu_offset = cmd->write_data_done;
 			if ((pdu_offset + (conn->sess->sess_ops->MaxBurstLength -
-			     cmd->next_burst_len)) > cmd->se_cmd.data_length)
-				pdu_length = (cmd->se_cmd.data_length -
+			     cmd->next_burst_len)) > cmd->se_cmd.t_iostate.data_length)
+				pdu_length = (cmd->se_cmd.t_iostate.data_length -
 					cmd->write_data_done);
 			else
 				pdu_length = (conn->sess->sess_ops->MaxBurstLength -
diff --git a/drivers/target/iscsi/iscsi_target_seq_pdu_list.c b/drivers/target/iscsi/iscsi_target_seq_pdu_list.c
index e446a09..dcb0530 100644
--- a/drivers/target/iscsi/iscsi_target_seq_pdu_list.c
+++ b/drivers/target/iscsi/iscsi_target_seq_pdu_list.c
@@ -220,7 +220,7 @@ static void iscsit_determine_counts_for_list(
 	u32 mdsl;
 	struct iscsi_conn *conn = cmd->conn;
 
-	if (cmd->se_cmd.data_direction == DMA_TO_DEVICE)
+	if (cmd->se_cmd.t_iostate.data_direction == DMA_TO_DEVICE)
 		mdsl = cmd->conn->conn_ops->MaxXmitDataSegmentLength;
 	else
 		mdsl = cmd->conn->conn_ops->MaxRecvDataSegmentLength;
@@ -231,10 +231,10 @@ static void iscsit_determine_counts_for_list(
 
 	if ((bl->type == PDULIST_UNSOLICITED) ||
 	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
-		unsolicited_data_length = min(cmd->se_cmd.data_length,
+		unsolicited_data_length = min(cmd->se_cmd.t_iostate.data_length,
 			conn->sess->sess_ops->FirstBurstLength);
 
-	while (offset < cmd->se_cmd.data_length) {
+	while (offset < cmd->se_cmd.t_iostate.data_length) {
 		*pdu_count += 1;
 
 		if (check_immediate) {
@@ -247,10 +247,10 @@ static void iscsit_determine_counts_for_list(
 			continue;
 		}
 		if (unsolicited_data_length > 0) {
-			if ((offset + mdsl) >= cmd->se_cmd.data_length) {
+			if ((offset + mdsl) >= cmd->se_cmd.t_iostate.data_length) {
 				unsolicited_data_length -=
-					(cmd->se_cmd.data_length - offset);
-				offset += (cmd->se_cmd.data_length - offset);
+					(cmd->se_cmd.t_iostate.data_length - offset);
+				offset += (cmd->se_cmd.t_iostate.data_length - offset);
 				continue;
 			}
 			if ((offset + mdsl)
@@ -269,8 +269,8 @@ static void iscsit_determine_counts_for_list(
 			unsolicited_data_length -= mdsl;
 			continue;
 		}
-		if ((offset + mdsl) >= cmd->se_cmd.data_length) {
-			offset += (cmd->se_cmd.data_length - offset);
+		if ((offset + mdsl) >= cmd->se_cmd.t_iostate.data_length) {
+			offset += (cmd->se_cmd.t_iostate.data_length - offset);
 			continue;
 		}
 		if ((burstlength + mdsl) >=
@@ -303,7 +303,7 @@ static int iscsit_do_build_pdu_and_seq_lists(
 	struct iscsi_pdu *pdu = cmd->pdu_list;
 	struct iscsi_seq *seq = cmd->seq_list;
 
-	if (cmd->se_cmd.data_direction == DMA_TO_DEVICE)
+	if (cmd->se_cmd.t_iostate.data_direction == DMA_TO_DEVICE)
 		mdsl = cmd->conn->conn_ops->MaxXmitDataSegmentLength;
 	else
 		mdsl = cmd->conn->conn_ops->MaxRecvDataSegmentLength;
@@ -317,10 +317,10 @@ static int iscsit_do_build_pdu_and_seq_lists(
 
 	if ((bl->type == PDULIST_UNSOLICITED) ||
 	    (bl->type == PDULIST_IMMEDIATE_AND_UNSOLICITED))
-		unsolicited_data_length = min(cmd->se_cmd.data_length,
+		unsolicited_data_length = min(cmd->se_cmd.t_iostate.data_length,
 			conn->sess->sess_ops->FirstBurstLength);
 
-	while (offset < cmd->se_cmd.data_length) {
+	while (offset < cmd->se_cmd.t_iostate.data_length) {
 		pdu_count++;
 		if (!datapduinorder) {
 			pdu[i].offset = offset;
@@ -354,21 +354,21 @@ static int iscsit_do_build_pdu_and_seq_lists(
 			continue;
 		}
 		if (unsolicited_data_length > 0) {
-			if ((offset + mdsl) >= cmd->se_cmd.data_length) {
+			if ((offset + mdsl) >= cmd->se_cmd.t_iostate.data_length) {
 				if (!datapduinorder) {
 					pdu[i].type = PDUTYPE_UNSOLICITED;
 					pdu[i].length =
-						(cmd->se_cmd.data_length - offset);
+						(cmd->se_cmd.t_iostate.data_length - offset);
 				}
 				if (!datasequenceinorder) {
 					seq[seq_no].type = SEQTYPE_UNSOLICITED;
 					seq[seq_no].pdu_count = pdu_count;
 					seq[seq_no].xfer_len = (burstlength +
-						(cmd->se_cmd.data_length - offset));
+						(cmd->se_cmd.t_iostate.data_length - offset));
 				}
 				unsolicited_data_length -=
-						(cmd->se_cmd.data_length - offset);
-				offset += (cmd->se_cmd.data_length - offset);
+						(cmd->se_cmd.t_iostate.data_length - offset);
+				offset += (cmd->se_cmd.t_iostate.data_length - offset);
 				continue;
 			}
 			if ((offset + mdsl) >=
@@ -406,18 +406,18 @@ static int iscsit_do_build_pdu_and_seq_lists(
 			unsolicited_data_length -= mdsl;
 			continue;
 		}
-		if ((offset + mdsl) >= cmd->se_cmd.data_length) {
+		if ((offset + mdsl) >= cmd->se_cmd.t_iostate.data_length) {
 			if (!datapduinorder) {
 				pdu[i].type = PDUTYPE_NORMAL;
-				pdu[i].length = (cmd->se_cmd.data_length - offset);
+				pdu[i].length = (cmd->se_cmd.t_iostate.data_length - offset);
 			}
 			if (!datasequenceinorder) {
 				seq[seq_no].type = SEQTYPE_NORMAL;
 				seq[seq_no].pdu_count = pdu_count;
 				seq[seq_no].xfer_len = (burstlength +
-					(cmd->se_cmd.data_length - offset));
+					(cmd->se_cmd.t_iostate.data_length - offset));
 			}
-			offset += (cmd->se_cmd.data_length - offset);
+			offset += (cmd->se_cmd.t_iostate.data_length - offset);
 			continue;
 		}
 		if ((burstlength + mdsl) >=
diff --git a/drivers/target/iscsi/iscsi_target_tmr.c b/drivers/target/iscsi/iscsi_target_tmr.c
index 3d63705..5379b99 100644
--- a/drivers/target/iscsi/iscsi_target_tmr.c
+++ b/drivers/target/iscsi/iscsi_target_tmr.c
@@ -280,9 +280,9 @@ static int iscsit_task_reassign_complete_write(
 		offset = cmd->next_burst_len = cmd->write_data_done;
 
 		if ((conn->sess->sess_ops->FirstBurstLength - offset) >=
-		     cmd->se_cmd.data_length) {
+		     cmd->se_cmd.t_iostate.data_length) {
 			no_build_r2ts = 1;
-			length = (cmd->se_cmd.data_length - offset);
+			length = (cmd->se_cmd.t_iostate.data_length - offset);
 		} else
 			length = (conn->sess->sess_ops->FirstBurstLength - offset);
 
diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
index 1f38177..7403242 100644
--- a/drivers/target/iscsi/iscsi_target_util.c
+++ b/drivers/target/iscsi/iscsi_target_util.c
@@ -355,14 +355,14 @@ int iscsit_check_unsolicited_dataout(struct iscsi_cmd *cmd, unsigned char *buf)
 	if (!(hdr->flags & ISCSI_FLAG_CMD_FINAL))
 		return 0;
 
-	if (((cmd->first_burst_len + payload_length) != cmd->se_cmd.data_length) &&
+	if (((cmd->first_burst_len + payload_length) != cmd->se_cmd.t_iostate.data_length) &&
 	    ((cmd->first_burst_len + payload_length) !=
 	      conn->sess->sess_ops->FirstBurstLength)) {
 		pr_err("Unsolicited non-immediate data received %u"
 			" does not equal FirstBurstLength: %u, and does"
 			" not equal ExpXferLen %u.\n",
 			(cmd->first_burst_len + payload_length),
-			conn->sess->sess_ops->FirstBurstLength, cmd->se_cmd.data_length);
+			conn->sess->sess_ops->FirstBurstLength, cmd->se_cmd.t_iostate.data_length);
 		transport_send_check_condition_and_sense(se_cmd,
 				TCM_INCORRECT_AMOUNT_OF_DATA, 0);
 		return -1;
diff --git a/drivers/target/loopback/tcm_loop.c b/drivers/target/loopback/tcm_loop.c
index 5091b31..a1e7ab3 100644
--- a/drivers/target/loopback/tcm_loop.c
+++ b/drivers/target/loopback/tcm_loop.c
@@ -152,7 +152,7 @@ static void tcm_loop_submission_work(struct work_struct *work)
 	transfer_length = scsi_transfer_length(sc);
 	if (!scsi_prot_sg_count(sc) &&
 	    scsi_get_prot_op(sc) != SCSI_PROT_NORMAL) {
-		se_cmd->prot_pto = true;
+		se_cmd->t_iostate.prot_pto = true;
 		/*
 		 * loopback transport doesn't support
 		 * WRITE_GENERATE, READ_STRIP protection
diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_target.c
index dcc6eba..366e112 100644
--- a/drivers/target/sbp/sbp_target.c
+++ b/drivers/target/sbp/sbp_target.c
@@ -1262,7 +1262,7 @@ static int sbp_rw_data(struct sbp_target_request *req)
 	struct fw_card *card;
 	struct sg_mapping_iter iter;
 
-	if (req->se_cmd.data_direction == DMA_FROM_DEVICE) {
+	if (req->se_cmd.t_iostate.data_direction == DMA_FROM_DEVICE) {
 		tcode = TCODE_WRITE_BLOCK_REQUEST;
 		sg_miter_flags = SG_MITER_FROM_SG;
 	} else {
@@ -1296,7 +1296,7 @@ static int sbp_rw_data(struct sbp_target_request *req)
 		num_pte = 0;
 
 		offset = sbp2_pointer_to_addr(&req->orb.data_descriptor);
-		length = req->se_cmd.data_length;
+		length = req->se_cmd.t_iostate.data_length;
 	}
 
 	sg_miter_start(&iter, req->se_cmd.t_iomem.t_data_sg,
diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index 4c82bbe..c806a96 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -71,9 +71,9 @@ target_emulate_report_referrals(struct se_cmd *cmd)
 	unsigned char *buf;
 	u32 rd_len = 0, off;
 
-	if (cmd->data_length < 4) {
+	if (cmd->t_iostate.data_length < 4) {
 		pr_warn("REPORT REFERRALS allocation length %u too"
-			" small\n", cmd->data_length);
+			" small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -96,10 +96,10 @@ target_emulate_report_referrals(struct se_cmd *cmd)
 		int pg_num;
 
 		off += 4;
-		if (cmd->data_length > off)
+		if (cmd->t_iostate.data_length > off)
 			put_unaligned_be64(map->lba_map_first_lba, &buf[off]);
 		off += 8;
-		if (cmd->data_length > off)
+		if (cmd->t_iostate.data_length > off)
 			put_unaligned_be64(map->lba_map_last_lba, &buf[off]);
 		off += 8;
 		rd_len += 20;
@@ -109,19 +109,19 @@ target_emulate_report_referrals(struct se_cmd *cmd)
 			int alua_state = map_mem->lba_map_mem_alua_state;
 			int alua_pg_id = map_mem->lba_map_mem_alua_pg_id;
 
-			if (cmd->data_length > off)
+			if (cmd->t_iostate.data_length > off)
 				buf[off] = alua_state & 0x0f;
 			off += 2;
-			if (cmd->data_length > off)
+			if (cmd->t_iostate.data_length > off)
 				buf[off] = (alua_pg_id >> 8) & 0xff;
 			off++;
-			if (cmd->data_length > off)
+			if (cmd->t_iostate.data_length > off)
 				buf[off] = (alua_pg_id & 0xff);
 			off++;
 			rd_len += 4;
 			pg_num++;
 		}
-		if (cmd->data_length > desc_num)
+		if (cmd->t_iostate.data_length > desc_num)
 			buf[desc_num] = pg_num;
 	}
 	spin_unlock(&dev->t10_alua.lba_map_lock);
@@ -161,9 +161,9 @@ target_emulate_report_target_port_groups(struct se_cmd *cmd)
 	else
 		off = 4;
 
-	if (cmd->data_length < off) {
+	if (cmd->t_iostate.data_length < off) {
 		pr_warn("REPORT TARGET PORT GROUPS allocation length %u too"
-			" small for %s header\n", cmd->data_length,
+			" small for %s header\n", cmd->t_iostate.data_length,
 			(ext_hdr) ? "extended" : "normal");
 		return TCM_INVALID_CDB_FIELD;
 	}
@@ -181,7 +181,7 @@ target_emulate_report_target_port_groups(struct se_cmd *cmd)
 		 * the allocation length and the response is truncated.
 		 */
 		if ((off + 8 + (tg_pt_gp->tg_pt_gp_members * 4)) >
-		     cmd->data_length) {
+		     cmd->t_iostate.data_length) {
 			rd_len += 8 + (tg_pt_gp->tg_pt_gp_members * 4);
 			continue;
 		}
@@ -289,9 +289,9 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
 	int alua_access_state, primary = 0, valid_states;
 	u16 tg_pt_id, rtpi;
 
-	if (cmd->data_length < 4) {
+	if (cmd->t_iostate.data_length < 4) {
 		pr_warn("SET TARGET PORT GROUPS parameter list length %u too"
-			" small\n", cmd->data_length);
+			" small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_PARAMETER_LIST;
 	}
 
@@ -324,7 +324,7 @@ target_emulate_set_target_port_groups(struct se_cmd *cmd)
 
 	ptr = &buf[4]; /* Skip over RESERVED area in header */
 
-	while (len < cmd->data_length) {
+	while (len < cmd->t_iostate.data_length) {
 		bool found = false;
 		alua_access_state = (ptr[0] & 0x0f);
 		/*
@@ -483,10 +483,10 @@ static inline int core_alua_state_lba_dependent(
 	spin_lock(&dev->t10_alua.lba_map_lock);
 	segment_size = dev->t10_alua.lba_map_segment_size;
 	segment_mult = dev->t10_alua.lba_map_segment_multiplier;
-	sectors = cmd->data_length / dev->dev_attrib.block_size;
+	sectors = cmd->t_iostate.data_length / dev->dev_attrib.block_size;
 
-	lba = cmd->t_task_lba;
-	while (lba < cmd->t_task_lba + sectors) {
+	lba = cmd->t_iostate.t_task_lba;
+	while (lba < cmd->t_iostate.t_task_lba + sectors) {
 		struct t10_alua_lba_map *cur_map = NULL, *map;
 		struct t10_alua_lba_map_member *map_mem;
 
diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index a4046ca..910c990 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -69,11 +69,11 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun)
 	if (deve) {
 		atomic_long_inc(&deve->total_cmds);
 
-		if (se_cmd->data_direction == DMA_TO_DEVICE)
-			atomic_long_add(se_cmd->data_length,
+		if (se_cmd->t_iostate.data_direction == DMA_TO_DEVICE)
+			atomic_long_add(se_cmd->t_iostate.data_length,
 					&deve->write_bytes);
-		else if (se_cmd->data_direction == DMA_FROM_DEVICE)
-			atomic_long_add(se_cmd->data_length,
+		else if (se_cmd->t_iostate.data_direction == DMA_FROM_DEVICE)
+			atomic_long_add(se_cmd->t_iostate.data_length,
 					&deve->read_bytes);
 
 		se_lun = rcu_dereference(deve->se_lun);
@@ -85,7 +85,7 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun)
 		percpu_ref_get(&se_lun->lun_ref);
 		se_cmd->lun_ref_active = true;
 
-		if ((se_cmd->data_direction == DMA_TO_DEVICE) &&
+		if ((se_cmd->t_iostate.data_direction == DMA_TO_DEVICE) &&
 		    deve->lun_access_ro) {
 			pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN"
 				" Access for 0x%08llx\n",
@@ -123,8 +123,8 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun)
 		/*
 		 * Force WRITE PROTECT for virtual LUN 0
 		 */
-		if ((se_cmd->data_direction != DMA_FROM_DEVICE) &&
-		    (se_cmd->data_direction != DMA_NONE)) {
+		if ((se_cmd->t_iostate.data_direction != DMA_FROM_DEVICE) &&
+		    (se_cmd->t_iostate.data_direction != DMA_NONE)) {
 			ret = TCM_WRITE_PROTECTED;
 			goto ref_dev;
 		}
@@ -139,11 +139,11 @@ ref_dev:
 	se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
 	atomic_long_inc(&se_cmd->se_dev->num_cmds);
 
-	if (se_cmd->data_direction == DMA_TO_DEVICE)
-		atomic_long_add(se_cmd->data_length,
+	if (se_cmd->t_iostate.data_direction == DMA_TO_DEVICE)
+		atomic_long_add(se_cmd->t_iostate.data_length,
 				&se_cmd->se_dev->write_bytes);
-	else if (se_cmd->data_direction == DMA_FROM_DEVICE)
-		atomic_long_add(se_cmd->data_length,
+	else if (se_cmd->t_iostate.data_direction == DMA_FROM_DEVICE)
+		atomic_long_add(se_cmd->t_iostate.data_length,
 				&se_cmd->se_dev->read_bytes);
 
 	return ret;
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 033c6a8..c865e45 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -254,7 +254,7 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
 	struct iov_iter iter;
 	struct bio_vec *bvec;
 	ssize_t len = 0;
-	loff_t pos = (cmd->t_task_lba * block_size);
+	loff_t pos = (cmd->t_iostate.t_task_lba * block_size);
 	int ret = 0, i;
 
 	bvec = kcalloc(sgl_nents, sizeof(struct bio_vec), GFP_KERNEL);
@@ -327,13 +327,13 @@ fd_execute_sync_cache(struct se_cmd *cmd)
 	/*
 	 * Determine if we will be flushing the entire device.
 	 */
-	if (cmd->t_task_lba == 0 && cmd->data_length == 0) {
+	if (cmd->t_iostate.t_task_lba == 0 && cmd->t_iostate.data_length == 0) {
 		start = 0;
 		end = LLONG_MAX;
 	} else {
-		start = cmd->t_task_lba * dev->dev_attrib.block_size;
-		if (cmd->data_length)
-			end = start + cmd->data_length - 1;
+		start = cmd->t_iostate.t_task_lba * dev->dev_attrib.block_size;
+		if (cmd->t_iostate.data_length)
+			end = start + cmd->t_iostate.data_length - 1;
 		else
 			end = LLONG_MAX;
 	}
@@ -358,7 +358,7 @@ fd_execute_write_same(struct se_cmd *cmd)
 {
 	struct se_device *se_dev = cmd->se_dev;
 	struct fd_dev *fd_dev = FD_DEV(se_dev);
-	loff_t pos = cmd->t_task_lba * se_dev->dev_attrib.block_size;
+	loff_t pos = cmd->t_iostate.t_task_lba * se_dev->dev_attrib.block_size;
 	sector_t nolb = sbc_get_write_same_sectors(cmd);
 	struct iov_iter iter;
 	struct bio_vec *bvec;
@@ -369,7 +369,7 @@ fd_execute_write_same(struct se_cmd *cmd)
 		target_complete_cmd(cmd, SAM_STAT_GOOD);
 		return 0;
 	}
-	if (cmd->prot_op) {
+	if (cmd->t_iostate.prot_op) {
 		pr_err("WRITE_SAME: Protection information with FILEIO"
 		       " backends not supported\n");
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -521,10 +521,10 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	 * We are currently limited by the number of iovecs (2048) per
 	 * single vfs_[writev,readv] call.
 	 */
-	if (cmd->data_length > FD_MAX_BYTES) {
+	if (cmd->t_iostate.data_length > FD_MAX_BYTES) {
 		pr_err("FILEIO: Not able to process I/O of %u bytes due to"
 		       "FD_MAX_BYTES: %u iovec count limitiation\n",
-			cmd->data_length, FD_MAX_BYTES);
+			cmd->t_iostate.data_length, FD_MAX_BYTES);
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 	/*
@@ -532,63 +532,63 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	 * physical memory addresses to struct iovec virtual memory.
 	 */
 	if (data_direction == DMA_FROM_DEVICE) {
-		if (cmd->prot_type && dev->dev_attrib.pi_prot_type) {
+		if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
 			ret = fd_do_rw(cmd, pfile, dev->prot_length,
 				       cmd->t_iomem.t_prot_sg,
 				       cmd->t_iomem.t_prot_nents,
-				       cmd->prot_length, 0);
+				       cmd->t_iostate.prot_length, 0);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
 
 		ret = fd_do_rw(cmd, file, dev->dev_attrib.block_size,
-			       sgl, sgl_nents, cmd->data_length, 0);
+			       sgl, sgl_nents, cmd->t_iostate.data_length, 0);
 
-		if (ret > 0 && cmd->prot_type && dev->dev_attrib.pi_prot_type) {
-			u32 sectors = cmd->data_length >>
+		if (ret > 0 && cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
+			u32 sectors = cmd->t_iostate.data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors,
+			rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors,
 					    0, cmd->t_iomem.t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
 	} else {
-		if (cmd->prot_type && dev->dev_attrib.pi_prot_type) {
-			u32 sectors = cmd->data_length >>
+		if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
+			u32 sectors = cmd->t_iostate.data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors,
+			rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors,
 					    0, cmd->t_iomem.t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
 
 		ret = fd_do_rw(cmd, file, dev->dev_attrib.block_size,
-			       sgl, sgl_nents, cmd->data_length, 1);
+			       sgl, sgl_nents, cmd->t_iostate.data_length, 1);
 		/*
 		 * Perform implicit vfs_fsync_range() for fd_do_writev() ops
 		 * for SCSI WRITEs with Forced Unit Access (FUA) set.
 		 * Allow this to happen independent of WCE=0 setting.
 		 */
 		if (ret > 0 && (cmd->se_cmd_flags & SCF_FUA)) {
-			loff_t start = cmd->t_task_lba *
+			loff_t start = cmd->t_iostate.t_task_lba *
 				dev->dev_attrib.block_size;
 			loff_t end;
 
-			if (cmd->data_length)
-				end = start + cmd->data_length - 1;
+			if (cmd->t_iostate.data_length)
+				end = start + cmd->t_iostate.data_length - 1;
 			else
 				end = LLONG_MAX;
 
 			vfs_fsync_range(fd_dev->fd_file, start, end, 1);
 		}
 
-		if (ret > 0 && cmd->prot_type && dev->dev_attrib.pi_prot_type) {
+		if (ret > 0 && cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
 			ret = fd_do_rw(cmd, pfile, dev->prot_length,
 				       cmd->t_iomem.t_prot_sg,
 				       cmd->t_iomem.t_prot_nents,
-				       cmd->prot_length, 1);
+				       cmd->t_iostate.prot_length, 1);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 80ad456..0f45973 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -430,7 +430,8 @@ iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
 	}
 
 	ret = blkdev_issue_write_same(bdev,
-				target_to_linux_sector(dev, cmd->t_task_lba),
+				target_to_linux_sector(dev,
+					cmd->t_iostate.t_task_lba),
 				target_to_linux_sector(dev,
 					sbc_get_write_same_sectors(cmd)),
 				GFP_KERNEL, page ? page : sg_page(sg));
@@ -452,11 +453,11 @@ iblock_execute_write_same(struct se_cmd *cmd)
 	struct bio *bio;
 	struct bio_list list;
 	struct se_device *dev = cmd->se_dev;
-	sector_t block_lba = target_to_linux_sector(dev, cmd->t_task_lba);
+	sector_t block_lba = target_to_linux_sector(dev, cmd->t_iostate.t_task_lba);
 	sector_t sectors = target_to_linux_sector(dev,
 					sbc_get_write_same_sectors(cmd));
 
-	if (cmd->prot_op) {
+	if (cmd->t_iostate.prot_op) {
 		pr_err("WRITE_SAME: Protection information with IBLOCK"
 		       " backends not supported\n");
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -643,7 +644,7 @@ iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
 		return PTR_ERR(bip);
 	}
 
-	bip->bip_iter.bi_size = (cmd->data_length / dev->dev_attrib.block_size) *
+	bip->bip_iter.bi_size = (cmd->t_iostate.data_length / dev->dev_attrib.block_size) *
 			 dev->prot_length;
 	bip->bip_iter.bi_sector = bio->bi_iter.bi_sector;
 
@@ -671,7 +672,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		  enum dma_data_direction data_direction)
 {
 	struct se_device *dev = cmd->se_dev;
-	sector_t block_lba = target_to_linux_sector(dev, cmd->t_task_lba);
+	sector_t block_lba = target_to_linux_sector(dev, cmd->t_iostate.t_task_lba);
 	struct iblock_req *ibr;
 	struct bio *bio, *bio_start;
 	struct bio_list list;
@@ -751,7 +752,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		sg_num--;
 	}
 
-	if (cmd->prot_type && dev->dev_attrib.pi_prot_type) {
+	if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
 		int rc = iblock_alloc_bip(cmd, bio_start);
 		if (rc)
 			goto fail_put_bios;
diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index 47463c9..f91058f 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -497,7 +497,7 @@ static int core_scsi3_pr_seq_non_holder(struct se_cmd *cmd, u32 pr_reg_type,
 	 * WRITE_EXCLUSIVE_* reservation.
 	 */
 	if (we && !registered_nexus) {
-		if (cmd->data_direction == DMA_TO_DEVICE) {
+		if (cmd->t_iostate.data_direction == DMA_TO_DEVICE) {
 			/*
 			 * Conflict for write exclusive
 			 */
@@ -545,7 +545,7 @@ static int core_scsi3_pr_seq_non_holder(struct se_cmd *cmd, u32 pr_reg_type,
                 * Reads are allowed for Write Exclusive locks
                 * from all registrants.
                 */
-               if (cmd->data_direction == DMA_FROM_DEVICE) {
+               if (cmd->t_iostate.data_direction == DMA_FROM_DEVICE) {
                        pr_debug("Allowing READ CDB: 0x%02x for %s"
                                " reservation\n", cdb[0],
                                core_scsi3_pr_dump_type(pr_reg_type));
@@ -1543,9 +1543,9 @@ core_scsi3_decode_spec_i_port(
 	tidh_new->dest_se_deve = NULL;
 	list_add_tail(&tidh_new->dest_list, &tid_dest_list);
 
-	if (cmd->data_length < 28) {
+	if (cmd->t_iostate.data_length < 28) {
 		pr_warn("SPC-PR: Received PR OUT parameter list"
-			" length too small: %u\n", cmd->data_length);
+			" length too small: %u\n", cmd->t_iostate.data_length);
 		ret = TCM_INVALID_PARAMETER_LIST;
 		goto out;
 	}
@@ -1566,10 +1566,10 @@ core_scsi3_decode_spec_i_port(
 	tpdl |= (buf[26] & 0xff) << 8;
 	tpdl |= buf[27] & 0xff;
 
-	if ((tpdl + 28) != cmd->data_length) {
+	if ((tpdl + 28) != cmd->t_iostate.data_length) {
 		pr_err("SPC-3 PR: Illegal tpdl: %u + 28 byte header"
-			" does not equal CDB data_length: %u\n", tpdl,
-			cmd->data_length);
+			" does not equal CDB data_length: %u\n", tpdl,
+			cmd->t_iostate.data_length);
 		ret = TCM_INVALID_PARAMETER_LIST;
 		goto out_unmap;
 	}
@@ -1658,9 +1658,9 @@ core_scsi3_decode_spec_i_port(
 			goto out_unmap;
 		}
 
-		pr_debug("SPC-3 PR SPEC_I_PT: Got %s data_length: %u tpdl: %u"
+		pr_debug("SPC-3 PR SPEC_I_PT: Got %s data_length: %u tpdl: %u"
 			" tid_len: %d for %s + %s\n",
-			dest_tpg->se_tpg_tfo->get_fabric_name(), cmd->data_length,
+			dest_tpg->se_tpg_tfo->get_fabric_name(), cmd->t_iostate.data_length,
 			tpdl, tid_len, i_str, iport_ptr);
 
 		if (tid_len > tpdl) {
@@ -3229,10 +3229,10 @@ core_scsi3_emulate_pro_register_and_move(struct se_cmd *cmd, u64 res_key,
 	transport_kunmap_data_sg(cmd);
 	buf = NULL;
 
-	if ((tid_len + 24) != cmd->data_length) {
+	if ((tid_len + 24) != cmd->t_iostate.data_length) {
 		pr_err("SPC-3 PR: Illegal tid_len: %u + 24 byte header"
-			" does not equal CDB data_length: %u\n", tid_len,
-			cmd->data_length);
+			" does not equal CDB data_length: %u\n", tid_len,
+			cmd->t_iostate.data_length);
 		ret = TCM_INVALID_PARAMETER_LIST;
 		goto out_put_pr_reg;
 	}
@@ -3598,9 +3598,9 @@ target_scsi3_emulate_pr_out(struct se_cmd *cmd)
 	if (!cmd->se_sess)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	if (cmd->data_length < 24) {
+	if (cmd->t_iostate.data_length < 24) {
 		pr_warn("SPC-PR: Received PR OUT parameter list"
-			" length too small: %u\n", cmd->data_length);
+			" length too small: %u\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_PARAMETER_LIST;
 	}
 
@@ -3658,9 +3658,9 @@ target_scsi3_emulate_pr_out(struct se_cmd *cmd)
 	 * code set to PARAMETER LIST LENGTH ERROR.
 	 */
 	if (!spec_i_pt && ((cdb[1] & 0x1f) != PRO_REGISTER_AND_MOVE) &&
-	    (cmd->data_length != 24)) {
+	    (cmd->t_iostate.data_length != 24)) {
 		pr_warn("SPC-PR: Received PR OUT illegal parameter"
-			" list length: %u\n", cmd->data_length);
+			" list length: %u\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_PARAMETER_LIST;
 	}
 
@@ -3723,9 +3723,9 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
 	unsigned char *buf;
 	u32 add_len = 0, off = 8;
 
-	if (cmd->data_length < 8) {
+	if (cmd->t_iostate.data_length < 8) {
 		pr_err("PRIN SA READ_KEYS SCSI Data Length: %u"
-			" too small\n", cmd->data_length);
+			" too small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -3745,7 +3745,7 @@ core_scsi3_pri_read_keys(struct se_cmd *cmd)
 		 * Check for overflow of 8byte PRI READ_KEYS payload and
 		 * next reservation key list descriptor.
 		 */
-		if ((add_len + 8) > (cmd->data_length - 8))
+		if ((add_len + 8) > (cmd->t_iostate.data_length - 8))
 			break;
 
 		buf[off++] = ((pr_reg->pr_res_key >> 56) & 0xff);
@@ -3785,9 +3785,9 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
 	u64 pr_res_key;
 	u32 add_len = 16; /* Hardcoded to 16 when a reservation is held. */
 
-	if (cmd->data_length < 8) {
+	if (cmd->t_iostate.data_length < 8) {
 		pr_err("PRIN SA READ_RESERVATIONS SCSI Data Length: %u"
-			" too small\n", cmd->data_length);
+			" too small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -3811,7 +3811,7 @@ core_scsi3_pri_read_reservation(struct se_cmd *cmd)
 		buf[6] = ((add_len >> 8) & 0xff);
 		buf[7] = (add_len & 0xff);
 
-		if (cmd->data_length < 22)
+		if (cmd->t_iostate.data_length < 22)
 			goto err;
 
 		/*
@@ -3871,9 +3871,9 @@ core_scsi3_pri_report_capabilities(struct se_cmd *cmd)
 	unsigned char *buf;
 	u16 add_len = 8; /* Hardcoded to 8. */
 
-	if (cmd->data_length < 6) {
+	if (cmd->t_iostate.data_length < 6) {
 		pr_err("PRIN SA REPORT_CAPABILITIES SCSI Data Length:"
-			" %u too small\n", cmd->data_length);
+			" %u too small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -3936,9 +3936,9 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
 	int exp_desc_len, desc_len;
 	bool all_reg = false;
 
-	if (cmd->data_length < 8) {
+	if (cmd->t_iostate.data_length < 8) {
 		pr_err("PRIN SA READ_FULL_STATUS SCSI Data Length: %u"
-			" too small\n", cmd->data_length);
+			" too small\n", cmd->t_iostate.data_length);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -3981,9 +3981,9 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
 		exp_desc_len = target_get_pr_transport_id_len(se_nacl, pr_reg,
 					&format_code);
 		if (exp_desc_len < 0 ||
-		    exp_desc_len + add_len > cmd->data_length) {
+		    exp_desc_len + add_len > cmd->t_iostate.data_length) {
 			pr_warn("SPC-3 PRIN READ_FULL_STATUS ran"
-				" out of buffer: %d\n", cmd->data_length);
+				" out of buffer: %d\n", cmd->t_iostate.data_length);
 			spin_lock(&pr_tmpl->registration_lock);
 			atomic_dec_mb(&pr_reg->pr_res_holders);
 			break;
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 75041dd..105894a 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -640,7 +640,7 @@ static void pscsi_transport_complete(struct se_cmd *cmd, struct scatterlist *sg,
 	 * Hack to make sure that Write-Protect modepage is set if R/O mode is
 	 * forced.
 	 */
-	if (!cmd->data_length)
+	if (!cmd->t_iostate.data_length)
 		goto after_mode_sense;
 
 	if (((cdb[0] == MODE_SENSE) || (cdb[0] == MODE_SENSE_10)) &&
@@ -667,7 +667,7 @@ static void pscsi_transport_complete(struct se_cmd *cmd, struct scatterlist *sg,
 	}
 after_mode_sense:
 
-	if (sd->type != TYPE_TAPE || !cmd->data_length)
+	if (sd->type != TYPE_TAPE || !cmd->t_iostate.data_length)
 		goto after_mode_select;
 
 	/*
@@ -882,8 +882,8 @@ pscsi_map_sg(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	struct bio *bio = NULL, *tbio = NULL;
 	struct page *page;
 	struct scatterlist *sg;
-	u32 data_len = cmd->data_length, i, len, bytes, off;
-	int nr_pages = (cmd->data_length + sgl[0].offset +
+	u32 data_len = cmd->t_iostate.data_length, i, len, bytes, off;
+	int nr_pages = (cmd->t_iostate.data_length + sgl[0].offset +
 			PAGE_SIZE - 1) >> PAGE_SHIFT;
 	int nr_vecs = 0, rc;
 	int rw = (data_direction == DMA_TO_DEVICE);
@@ -992,7 +992,7 @@ pscsi_execute_cmd(struct se_cmd *cmd)
 {
 	struct scatterlist *sgl = cmd->t_iomem.t_data_sg;
 	u32 sgl_nents = cmd->t_iomem.t_data_nents;
-	enum dma_data_direction data_direction = cmd->data_direction;
+	enum dma_data_direction data_direction = cmd->t_iostate.data_direction;
 	struct pscsi_dev_virt *pdv = PSCSI_DEV(cmd->se_dev);
 	struct pscsi_plugin_task *pt;
 	struct request *req;
@@ -1024,7 +1024,7 @@ pscsi_execute_cmd(struct se_cmd *cmd)
 
 		blk_rq_set_block_pc(req);
 	} else {
-		BUG_ON(!cmd->data_length);
+		BUG_ON(!cmd->t_iostate.data_length);
 
 		ret = pscsi_map_sg(cmd, sgl, sgl_nents, data_direction, &hbio);
 		if (ret)
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index d840281..4edd3e0 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -404,13 +404,13 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 	struct rd_dev *dev = RD_DEV(se_dev);
 	struct rd_dev_sg_table *prot_table;
 	struct scatterlist *prot_sg;
-	u32 sectors = cmd->data_length / se_dev->dev_attrib.block_size;
+	u32 sectors = cmd->t_iostate.data_length / se_dev->dev_attrib.block_size;
 	u32 prot_offset, prot_page;
 	u32 prot_npages __maybe_unused;
 	u64 tmp;
 	sense_reason_t rc = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	tmp = cmd->t_task_lba * se_dev->prot_length;
+	tmp = cmd->t_iostate.t_task_lba * se_dev->prot_length;
 	prot_offset = do_div(tmp, PAGE_SIZE);
 	prot_page = tmp;
 
@@ -422,10 +422,10 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 					prot_table->page_start_offset];
 
 	if (is_read)
-		rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors, 0,
+		rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors, 0,
 				    prot_sg, prot_offset);
 	else
-		rc = sbc_dif_verify(cmd, cmd->t_task_lba, sectors, 0,
+		rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors, 0,
 				    cmd->t_iomem.t_prot_sg, 0);
 
 	if (!rc)
@@ -455,10 +455,10 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		return 0;
 	}
 
-	tmp = cmd->t_task_lba * se_dev->dev_attrib.block_size;
+	tmp = cmd->t_iostate.t_task_lba * se_dev->dev_attrib.block_size;
 	rd_offset = do_div(tmp, PAGE_SIZE);
 	rd_page = tmp;
-	rd_size = cmd->data_length;
+	rd_size = cmd->t_iostate.data_length;
 
 	table = rd_get_sg_table(dev, rd_page);
 	if (!table)
@@ -469,9 +469,9 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	pr_debug("RD[%u]: %s LBA: %llu, Size: %u Page: %u, Offset: %u\n",
 			dev->rd_dev_id,
 			data_direction == DMA_FROM_DEVICE ? "Read" : "Write",
-			cmd->t_task_lba, rd_size, rd_page, rd_offset);
+			cmd->t_iostate.t_task_lba, rd_size, rd_page, rd_offset);
 
-	if (cmd->prot_type && se_dev->dev_attrib.pi_prot_type &&
+	if (cmd->t_iostate.prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_TO_DEVICE) {
 		rc = rd_do_prot_rw(cmd, false);
 		if (rc)
@@ -539,7 +539,7 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	}
 	sg_miter_stop(&m);
 
-	if (cmd->prot_type && se_dev->dev_attrib.pi_prot_type &&
+	if (cmd->t_iostate.prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_FROM_DEVICE) {
 		rc = rd_do_prot_rw(cmd, true);
 		if (rc)
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index c80a225..744ef71 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -81,7 +81,7 @@ sbc_emulate_readcapacity(struct se_cmd *cmd)
 
 	rbuf = transport_kmap_data_sg(cmd);
 	if (rbuf) {
-		memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+		memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->t_iostate.data_length));
 		transport_kunmap_data_sg(cmd);
 	}
 
@@ -154,7 +154,7 @@ sbc_emulate_readcapacity_16(struct se_cmd *cmd)
 
 	rbuf = transport_kmap_data_sg(cmd);
 	if (rbuf) {
-		memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+		memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->t_iostate.data_length));
 		transport_kunmap_data_sg(cmd);
 	}
 
@@ -213,7 +213,7 @@ sector_t sbc_get_write_same_sectors(struct se_cmd *cmd)
 		return num_blocks;
 
 	return cmd->se_dev->transport->get_blocks(cmd->se_dev) -
-		cmd->t_task_lba + 1;
+		cmd->t_iostate.t_task_lba + 1;
 }
 EXPORT_SYMBOL(sbc_get_write_same_sectors);
 
@@ -225,7 +225,7 @@ sbc_execute_write_same_unmap(struct se_cmd *cmd)
 	sense_reason_t ret;
 
 	if (nolb) {
-		ret = ops->execute_unmap(cmd, cmd->t_task_lba, nolb);
+		ret = ops->execute_unmap(cmd, cmd->t_iostate.t_task_lba, nolb);
 		if (ret)
 			return ret;
 	}
@@ -340,10 +340,10 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 	/*
 	 * Sanity check for LBA wrap and request past end of device.
 	 */
-	if (((cmd->t_task_lba + sectors) < cmd->t_task_lba) ||
-	    ((cmd->t_task_lba + sectors) > end_lba)) {
+	if (((cmd->t_iostate.t_task_lba + sectors) < cmd->t_iostate.t_task_lba) ||
+	    ((cmd->t_iostate.t_task_lba + sectors) > end_lba)) {
 		pr_err("WRITE_SAME exceeds last lba %llu (lba %llu, sectors %u)\n",
-		       (unsigned long long)end_lba, cmd->t_task_lba, sectors);
+		       (unsigned long long)end_lba, cmd->t_iostate.t_task_lba, sectors);
 		return TCM_ADDRESS_OUT_OF_RANGE;
 	}
 
@@ -398,7 +398,7 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
 	 *    blocks transferred from the data-out buffer; and
 	 * 5) transfer the resulting XOR data to the data-in buffer.
 	 */
-	buf = kmalloc(cmd->data_length, GFP_KERNEL);
+	buf = kmalloc(cmd->t_iostate.data_length, GFP_KERNEL);
 	if (!buf) {
 		pr_err("Unable to allocate xor_callback buf\n");
 		return TCM_OUT_OF_RESOURCES;
@@ -410,7 +410,7 @@ static sense_reason_t xdreadwrite_callback(struct se_cmd *cmd, bool success,
 	sg_copy_to_buffer(cmd->t_iomem.t_data_sg,
 			  cmd->t_iomem.t_data_nents,
 			  buf,
-			  cmd->data_length);
+			  cmd->t_iostate.data_length);
 
 	/*
 	 * Now perform the XOR against the BIDI read memory located at
@@ -444,7 +444,7 @@ sbc_execute_rw(struct se_cmd *cmd)
 	struct sbc_ops *ops = cmd->protocol_data;
 
 	return ops->execute_rw(cmd, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
-			       cmd->data_direction);
+			       cmd->t_iostate.data_direction);
 }
 
 static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success,
@@ -481,7 +481,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	unsigned char *buf = NULL, *addr;
 	struct sg_mapping_iter m;
 	unsigned int offset = 0, len;
-	unsigned int nlbas = cmd->t_task_nolb;
+	unsigned int nlbas = cmd->t_iostate.t_task_nolb;
 	unsigned int block_size = dev->dev_attrib.block_size;
 	unsigned int compare_len = (nlbas * block_size);
 	sense_reason_t ret = TCM_NO_SENSE;
@@ -496,7 +496,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	/*
 	 * Handle special case for zero-length COMPARE_AND_WRITE
 	 */
-	if (!cmd->data_length)
+	if (!cmd->t_iostate.data_length)
 		goto out;
 	/*
 	 * Immediately exit + release dev->caw_sem if command has already
@@ -508,7 +508,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 		goto out;
 	}
 
-	buf = kzalloc(cmd->data_length, GFP_KERNEL);
+	buf = kzalloc(cmd->t_iostate.data_length, GFP_KERNEL);
 	if (!buf) {
 		pr_err("Unable to allocate compare_and_write buf\n");
 		ret = TCM_OUT_OF_RESOURCES;
@@ -527,7 +527,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	 * Setup verify and write data payloads from total NumberLBAs.
 	 */
 	rc = sg_copy_to_buffer(cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
-			       buf, cmd->data_length);
+			       buf, cmd->t_iostate.data_length);
 	if (!rc) {
 		pr_err("sg_copy_to_buffer() failed for compare_and_write\n");
 		ret = TCM_OUT_OF_RESOURCES;
@@ -561,7 +561,7 @@ static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool succes
 	}
 
 	i = 0;
-	len = cmd->t_task_nolb * block_size;
+	len = cmd->t_iostate.t_task_nolb * block_size;
 	sg_miter_start(&m, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
 		       SG_MITER_TO_SG);
 	/*
@@ -642,11 +642,12 @@ sbc_compare_and_write(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 	/*
-	 * Reset cmd->data_length to individual block_size in order to not
+	 * Reset cmd->t_iostate.data_length to individual block_size in order to not
 	 * confuse backend drivers that depend on this value matching the
 	 * size of the I/O being submitted.
 	 */
-	cmd->data_length = cmd->t_task_nolb * dev->dev_attrib.block_size;
+	cmd->t_iostate.data_length = cmd->t_iostate.t_task_nolb *
+				     dev->dev_attrib.block_size;
 
 	ret = ops->execute_rw(cmd, cmd->t_iomem.t_bidi_data_sg,
 			      cmd->t_iomem.t_bidi_data_nents, DMA_FROM_DEVICE);
@@ -668,52 +669,52 @@ sbc_set_prot_op_checks(u8 protect, bool fabric_prot, enum target_prot_type prot_
 		       bool is_write, struct se_cmd *cmd)
 {
 	if (is_write) {
-		cmd->prot_op = fabric_prot ? TARGET_PROT_DOUT_STRIP :
+		cmd->t_iostate.prot_op = fabric_prot ? TARGET_PROT_DOUT_STRIP :
 			       protect ? TARGET_PROT_DOUT_PASS :
 			       TARGET_PROT_DOUT_INSERT;
 		switch (protect) {
 		case 0x0:
 		case 0x3:
-			cmd->prot_checks = 0;
+			cmd->t_iostate.prot_checks = 0;
 			break;
 		case 0x1:
 		case 0x5:
-			cmd->prot_checks = TARGET_DIF_CHECK_GUARD;
+			cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_GUARD;
 			if (prot_type == TARGET_DIF_TYPE1_PROT)
-				cmd->prot_checks |= TARGET_DIF_CHECK_REFTAG;
+				cmd->t_iostate.prot_checks |= TARGET_DIF_CHECK_REFTAG;
 			break;
 		case 0x2:
 			if (prot_type == TARGET_DIF_TYPE1_PROT)
-				cmd->prot_checks = TARGET_DIF_CHECK_REFTAG;
+				cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_REFTAG;
 			break;
 		case 0x4:
-			cmd->prot_checks = TARGET_DIF_CHECK_GUARD;
+			cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_GUARD;
 			break;
 		default:
 			pr_err("Unsupported protect field %d\n", protect);
 			return -EINVAL;
 		}
 	} else {
-		cmd->prot_op = fabric_prot ? TARGET_PROT_DIN_INSERT :
+		cmd->t_iostate.prot_op = fabric_prot ? TARGET_PROT_DIN_INSERT :
 			       protect ? TARGET_PROT_DIN_PASS :
 			       TARGET_PROT_DIN_STRIP;
 		switch (protect) {
 		case 0x0:
 		case 0x1:
 		case 0x5:
-			cmd->prot_checks = TARGET_DIF_CHECK_GUARD;
+			cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_GUARD;
 			if (prot_type == TARGET_DIF_TYPE1_PROT)
-				cmd->prot_checks |= TARGET_DIF_CHECK_REFTAG;
+				cmd->t_iostate.prot_checks |= TARGET_DIF_CHECK_REFTAG;
 			break;
 		case 0x2:
 			if (prot_type == TARGET_DIF_TYPE1_PROT)
-				cmd->prot_checks = TARGET_DIF_CHECK_REFTAG;
+				cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_REFTAG;
 			break;
 		case 0x3:
-			cmd->prot_checks = 0;
+			cmd->t_iostate.prot_checks = 0;
 			break;
 		case 0x4:
-			cmd->prot_checks = TARGET_DIF_CHECK_GUARD;
+			cmd->t_iostate.prot_checks = TARGET_DIF_CHECK_GUARD;
 			break;
 		default:
 			pr_err("Unsupported protect field %d\n", protect);
@@ -740,22 +741,22 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
 			       " not advertise PROTECT=1 feature bit\n");
 			return TCM_INVALID_CDB_FIELD;
 		}
-		if (cmd->prot_pto)
+		if (cmd->t_iostate.prot_pto)
 			return TCM_NO_SENSE;
 	}
 
 	switch (dev->dev_attrib.pi_prot_type) {
 	case TARGET_DIF_TYPE3_PROT:
-		cmd->reftag_seed = 0xffffffff;
+		cmd->t_iostate.reftag_seed = 0xffffffff;
 		break;
 	case TARGET_DIF_TYPE2_PROT:
 		if (protect)
 			return TCM_INVALID_CDB_FIELD;
 
-		cmd->reftag_seed = cmd->t_task_lba;
+		cmd->t_iostate.reftag_seed = cmd->t_iostate.t_task_lba;
 		break;
 	case TARGET_DIF_TYPE1_PROT:
-		cmd->reftag_seed = cmd->t_task_lba;
+		cmd->t_iostate.reftag_seed = cmd->t_iostate.t_task_lba;
 		break;
 	case TARGET_DIF_TYPE0_PROT:
 		/*
@@ -783,8 +784,8 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
 	if (sbc_set_prot_op_checks(protect, fabric_prot, pi_prot_type, is_write, cmd))
 		return TCM_INVALID_CDB_FIELD;
 
-	cmd->prot_type = pi_prot_type;
-	cmd->prot_length = dev->prot_length * sectors;
+	cmd->t_iostate.prot_type = pi_prot_type;
+	cmd->t_iostate.prot_length = dev->prot_length * sectors;
 
 	/**
 	 * In case protection information exists over the wire
@@ -793,12 +794,13 @@ sbc_check_prot(struct se_device *dev, struct se_cmd *cmd, unsigned char *cdb,
 	 * length
 	 **/
 	if (protect)
-		cmd->data_length = sectors * dev->dev_attrib.block_size;
+		cmd->t_iostate.data_length = sectors * dev->dev_attrib.block_size;
 
-	pr_debug("%s: prot_type=%d, data_length=%d, prot_length=%d "
-		 "prot_op=%d prot_checks=%d\n",
-		 __func__, cmd->prot_type, cmd->data_length, cmd->prot_length,
-		 cmd->prot_op, cmd->prot_checks);
+	pr_debug("%s: prot_type=%d, t_iostate.data_length=%d, prot_length=%d "
+		 "prot_op=%d t_iostate.prot_checks=%d\n",
+		 __func__, cmd->t_iostate.prot_type, cmd->t_iostate.data_length,
+		 cmd->t_iostate.prot_length, cmd->t_iostate.prot_op,
+		 cmd->t_iostate.prot_checks);
 
 	return TCM_NO_SENSE;
 }
@@ -840,13 +842,13 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 	switch (cdb[0]) {
 	case READ_6:
 		sectors = transport_get_sectors_6(cdb);
-		cmd->t_task_lba = transport_lba_21(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_21(cdb);
 		cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
 		cmd->execute_cmd = sbc_execute_rw;
 		break;
 	case READ_10:
 		sectors = transport_get_sectors_10(cdb);
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -860,7 +862,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		break;
 	case READ_12:
 		sectors = transport_get_sectors_12(cdb);
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -874,7 +876,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		break;
 	case READ_16:
 		sectors = transport_get_sectors_16(cdb);
-		cmd->t_task_lba = transport_lba_64(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_64(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -888,14 +890,14 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		break;
 	case WRITE_6:
 		sectors = transport_get_sectors_6(cdb);
-		cmd->t_task_lba = transport_lba_21(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_21(cdb);
 		cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
 		cmd->execute_cmd = sbc_execute_rw;
 		break;
 	case WRITE_10:
 	case WRITE_VERIFY:
 		sectors = transport_get_sectors_10(cdb);
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -909,7 +911,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		break;
 	case WRITE_12:
 		sectors = transport_get_sectors_12(cdb);
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -923,7 +925,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		break;
 	case WRITE_16:
 		sectors = transport_get_sectors_16(cdb);
-		cmd->t_task_lba = transport_lba_64(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_64(cdb);
 
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
@@ -936,7 +938,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		cmd->execute_cmd = sbc_execute_rw;
 		break;
 	case XDWRITEREAD_10:
-		if (cmd->data_direction != DMA_TO_DEVICE ||
+		if (cmd->t_iostate.data_direction != DMA_TO_DEVICE ||
 		    !(cmd->se_cmd_flags & SCF_BIDI))
 			return TCM_INVALID_CDB_FIELD;
 		sectors = transport_get_sectors_10(cdb);
@@ -944,7 +946,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		if (sbc_check_dpofua(dev, cmd, cdb))
 			return TCM_INVALID_CDB_FIELD;
 
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 		cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
 
 		/*
@@ -966,7 +968,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 			 * Use WRITE_32 and READ_32 opcodes for the emulated
 			 * XDWRITE_READ_32 logic.
 			 */
-			cmd->t_task_lba = transport_lba_64_ext(cdb);
+			cmd->t_iostate.t_task_lba = transport_lba_64_ext(cdb);
 			cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
 
 			/*
@@ -985,7 +987,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 			}
 
 			size = sbc_get_size(cmd, 1);
-			cmd->t_task_lba = get_unaligned_be64(&cdb[12]);
+			cmd->t_iostate.t_task_lba = get_unaligned_be64(&cdb[12]);
 
 			ret = sbc_setup_write_same(cmd, &cdb[10], ops);
 			if (ret)
@@ -1016,8 +1018,8 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		 * zero is not an error..
 		 */
 		size = 2 * sbc_get_size(cmd, sectors);
-		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);
-		cmd->t_task_nolb = sectors;
+		cmd->t_iostate.t_task_lba = get_unaligned_be64(&cdb[2]);
+		cmd->t_iostate.t_task_nolb = sectors;
 		cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB | SCF_COMPARE_AND_WRITE;
 		cmd->execute_cmd = sbc_compare_and_write;
 		cmd->transport_complete_callback = compare_and_write_callback;
@@ -1046,10 +1048,10 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 	case SYNCHRONIZE_CACHE_16:
 		if (cdb[0] == SYNCHRONIZE_CACHE) {
 			sectors = transport_get_sectors_10(cdb);
-			cmd->t_task_lba = transport_lba_32(cdb);
+			cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 		} else {
 			sectors = transport_get_sectors_16(cdb);
-			cmd->t_task_lba = transport_lba_64(cdb);
+			cmd->t_iostate.t_task_lba = transport_lba_64(cdb);
 		}
 		if (ops->execute_sync_cache) {
 			cmd->execute_cmd = ops->execute_sync_cache;
@@ -1078,7 +1080,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		}
 
 		size = sbc_get_size(cmd, 1);
-		cmd->t_task_lba = get_unaligned_be64(&cdb[2]);
+		cmd->t_iostate.t_task_lba = get_unaligned_be64(&cdb[2]);
 
 		ret = sbc_setup_write_same(cmd, &cdb[1], ops);
 		if (ret)
@@ -1092,7 +1094,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		}
 
 		size = sbc_get_size(cmd, 1);
-		cmd->t_task_lba = get_unaligned_be32(&cdb[2]);
+		cmd->t_iostate.t_task_lba = get_unaligned_be32(&cdb[2]);
 
 		/*
 		 * Follow sbcr26 with WRITE_SAME (10) and check for the existence
@@ -1105,7 +1107,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 	case VERIFY:
 		size = 0;
 		sectors = transport_get_sectors_10(cdb);
-		cmd->t_task_lba = transport_lba_32(cdb);
+		cmd->t_iostate.t_task_lba = transport_lba_32(cdb);
 		cmd->execute_cmd = sbc_emulate_noop;
 		goto check_lba;
 	case REZERO_UNIT:
@@ -1138,11 +1140,11 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 		unsigned long long end_lba;
 check_lba:
 		end_lba = dev->transport->get_blocks(dev) + 1;
-		if (((cmd->t_task_lba + sectors) < cmd->t_task_lba) ||
-		    ((cmd->t_task_lba + sectors) > end_lba)) {
+		if (((cmd->t_iostate.t_task_lba + sectors) < cmd->t_iostate.t_task_lba) ||
+		    ((cmd->t_iostate.t_task_lba + sectors) > end_lba)) {
 			pr_err("cmd exceeds last lba %llu "
 				"(lba %llu, sectors %u)\n",
-				end_lba, cmd->t_task_lba, sectors);
+				end_lba, cmd->t_iostate.t_task_lba, sectors);
 			return TCM_ADDRESS_OUT_OF_RANGE;
 		}
 
@@ -1176,14 +1178,14 @@ sbc_execute_unmap(struct se_cmd *cmd)
 	if (cmd->t_task_cdb[1])
 		return TCM_INVALID_CDB_FIELD;
 
-	if (cmd->data_length == 0) {
+	if (cmd->t_iostate.data_length == 0) {
 		target_complete_cmd(cmd, SAM_STAT_GOOD);
 		return 0;
 	}
 
-	if (cmd->data_length < 8) {
+	if (cmd->t_iostate.data_length < 8) {
 		pr_warn("UNMAP parameter list length %u too small\n",
-			cmd->data_length);
+			cmd->t_iostate.data_length);
 		return TCM_PARAMETER_LIST_LENGTH_ERROR;
 	}
 
@@ -1194,10 +1196,10 @@ sbc_execute_unmap(struct se_cmd *cmd)
 	dl = get_unaligned_be16(&buf[0]);
 	bd_dl = get_unaligned_be16(&buf[2]);
 
-	size = cmd->data_length - 8;
+	size = cmd->t_iostate.data_length - 8;
 	if (bd_dl > size)
 		pr_warn("UNMAP parameter list length %u too small, ignoring bd_dl %u\n",
-			cmd->data_length, bd_dl);
+			cmd->t_iostate.data_length, bd_dl);
 	else
 		size = bd_dl;
 
@@ -1248,7 +1250,7 @@ sbc_dif_generate(struct se_cmd *cmd)
 	struct se_device *dev = cmd->se_dev;
 	struct t10_pi_tuple *sdt;
 	struct scatterlist *dsg = cmd->t_iomem.t_data_sg, *psg;
-	sector_t sector = cmd->t_task_lba;
+	sector_t sector = cmd->t_iostate.t_task_lba;
 	void *daddr, *paddr;
 	int i, j, offset = 0;
 	unsigned int block_size = dev->dev_attrib.block_size;
@@ -1291,13 +1293,13 @@ sbc_dif_generate(struct se_cmd *cmd)
 			}
 
 			sdt->guard_tag = cpu_to_be16(crc);
-			if (cmd->prot_type == TARGET_DIF_TYPE1_PROT)
+			if (cmd->t_iostate.prot_type == TARGET_DIF_TYPE1_PROT)
 				sdt->ref_tag = cpu_to_be32(sector & 0xffffffff);
 			sdt->app_tag = 0;
 
 			pr_debug("DIF %s INSERT sector: %llu guard_tag: 0x%04x"
 				 " app_tag: 0x%04x ref_tag: %u\n",
-				 (cmd->data_direction == DMA_TO_DEVICE) ?
+				 (cmd->t_iostate.data_direction == DMA_TO_DEVICE) ?
 				 "WRITE" : "READ", (unsigned long long)sector,
 				 sdt->guard_tag, sdt->app_tag,
 				 be32_to_cpu(sdt->ref_tag));
@@ -1316,7 +1318,7 @@ sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
 {
 	__be16 csum;
 
-	if (!(cmd->prot_checks & TARGET_DIF_CHECK_GUARD))
+	if (!(cmd->t_iostate.prot_checks & TARGET_DIF_CHECK_GUARD))
 		goto check_ref;
 
 	csum = cpu_to_be16(crc);
@@ -1329,10 +1331,10 @@ sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
 	}
 
 check_ref:
-	if (!(cmd->prot_checks & TARGET_DIF_CHECK_REFTAG))
+	if (!(cmd->t_iostate.prot_checks & TARGET_DIF_CHECK_REFTAG))
 		return 0;
 
-	if (cmd->prot_type == TARGET_DIF_TYPE1_PROT &&
+	if (cmd->t_iostate.prot_type == TARGET_DIF_TYPE1_PROT &&
 	    be32_to_cpu(sdt->ref_tag) != (sector & 0xffffffff)) {
 		pr_err("DIFv1 Type 1 reference failed on sector: %llu tag: 0x%08x"
 		       " sector MSB: 0x%08x\n", (unsigned long long)sector,
@@ -1340,7 +1342,7 @@ check_ref:
 		return TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED;
 	}
 
-	if (cmd->prot_type == TARGET_DIF_TYPE2_PROT &&
+	if (cmd->t_iostate.prot_type == TARGET_DIF_TYPE2_PROT &&
 	    be32_to_cpu(sdt->ref_tag) != ei_lba) {
 		pr_err("DIFv1 Type 2 reference failed on sector: %llu tag: 0x%08x"
 		       " ei_lba: 0x%08x\n", (unsigned long long)sector,
@@ -1463,7 +1465,7 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 			if (rc) {
 				kunmap_atomic(daddr - dsg->offset);
 				kunmap_atomic(paddr - psg->offset);
-				cmd->bad_sector = sector;
+				cmd->t_iostate.bad_sector = sector;
 				return rc;
 			}
 next:
diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 2a91ed3..2364de7 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -752,7 +752,7 @@ spc_emulate_inquiry(struct se_cmd *cmd)
 out:
 	rbuf = transport_kmap_data_sg(cmd);
 	if (rbuf) {
-		memcpy(rbuf, buf, min_t(u32, SE_INQUIRY_BUF, cmd->data_length));
+		memcpy(rbuf, buf, min_t(u32, SE_INQUIRY_BUF, cmd->t_iostate.data_length));
 		transport_kunmap_data_sg(cmd);
 	}
 	kfree(buf);
@@ -1099,7 +1099,7 @@ set_length:
 
 	rbuf = transport_kmap_data_sg(cmd);
 	if (rbuf) {
-		memcpy(rbuf, buf, min_t(u32, SE_MODE_PAGE_BUF, cmd->data_length));
+		memcpy(rbuf, buf, min_t(u32, SE_MODE_PAGE_BUF, cmd->t_iostate.data_length));
 		transport_kunmap_data_sg(cmd);
 	}
 
@@ -1120,12 +1120,12 @@ static sense_reason_t spc_emulate_modeselect(struct se_cmd *cmd)
 	sense_reason_t ret = 0;
 	int i;
 
-	if (!cmd->data_length) {
+	if (!cmd->t_iostate.data_length) {
 		target_complete_cmd(cmd, GOOD);
 		return 0;
 	}
 
-	if (cmd->data_length < off + 2)
+	if (cmd->t_iostate.data_length < off + 2)
 		return TCM_PARAMETER_LIST_LENGTH_ERROR;
 
 	buf = transport_kmap_data_sg(cmd);
@@ -1152,7 +1152,7 @@ static sense_reason_t spc_emulate_modeselect(struct se_cmd *cmd)
 	goto out;
 
 check_contents:
-	if (cmd->data_length < off + length) {
+	if (cmd->t_iostate.data_length < off + length) {
 		ret = TCM_PARAMETER_LIST_LENGTH_ERROR;
 		goto out;
 	}
@@ -1194,7 +1194,7 @@ static sense_reason_t spc_emulate_request_sense(struct se_cmd *cmd)
 	else
 		scsi_build_sense_buffer(desc_format, buf, NO_SENSE, 0x0, 0x0);
 
-	memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+	memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->t_iostate.data_length));
 	transport_kunmap_data_sg(cmd);
 
 	target_complete_cmd(cmd, GOOD);
@@ -1212,7 +1212,7 @@ sense_reason_t spc_emulate_report_luns(struct se_cmd *cmd)
 	__be32 len;
 
 	buf = transport_kmap_data_sg(cmd);
-	if (cmd->data_length && !buf)
+	if (cmd->t_iostate.data_length && !buf)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
 	/*
@@ -1233,12 +1233,12 @@ sense_reason_t spc_emulate_report_luns(struct se_cmd *cmd)
 		 * See SPC2-R20 7.19.
 		 */
 		lun_count++;
-		if (offset >= cmd->data_length)
+		if (offset >= cmd->t_iostate.data_length)
 			continue;
 
 		int_to_scsilun(deve->mapped_lun, &slun);
 		memcpy(buf + offset, &slun,
-		       min(8u, cmd->data_length - offset));
+		       min(8u, cmd->t_iostate.data_length - offset));
 		offset += 8;
 	}
 	rcu_read_unlock();
@@ -1252,15 +1252,15 @@ done:
 	 */
 	if (lun_count == 0) {
 		int_to_scsilun(0, &slun);
-		if (cmd->data_length > 8)
+		if (cmd->t_iostate.data_length > 8)
 			memcpy(buf + offset, &slun,
-			       min(8u, cmd->data_length - offset));
+			       min(8u, cmd->t_iostate.data_length - offset));
 		lun_count = 1;
 	}
 
 	if (buf) {
 		len = cpu_to_be32(lun_count * 8);
-		memcpy(buf, &len, min_t(int, sizeof len, cmd->data_length));
+		memcpy(buf, &len, min_t(int, sizeof len, cmd->t_iostate.data_length));
 		transport_kunmap_data_sg(cmd);
 	}
 
@@ -1316,7 +1316,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
 		if (cdb[0] == RELEASE_10)
 			*size = (cdb[7] << 8) | cdb[8];
 		else
-			*size = cmd->data_length;
+			*size = cmd->t_iostate.data_length;
 
 		cmd->execute_cmd = target_scsi2_reservation_release;
 		break;
@@ -1329,7 +1329,7 @@ spc_parse_cdb(struct se_cmd *cmd, unsigned int *size)
 		if (cdb[0] == RESERVE_10)
 			*size = (cdb[7] << 8) | cdb[8];
 		else
-			*size = cmd->data_length;
+			*size = cmd->t_iostate.data_length;
 
 		cmd->execute_cmd = target_scsi2_reservation_reserve;
 		break;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index e1e7c49..18661da 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -754,15 +754,15 @@ EXPORT_SYMBOL(target_complete_cmd);
 
 void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
 {
-	if (scsi_status == SAM_STAT_GOOD && length < cmd->data_length) {
+	if (scsi_status == SAM_STAT_GOOD && length < cmd->t_iostate.data_length) {
 		if (cmd->se_cmd_flags & SCF_UNDERFLOW_BIT) {
-			cmd->residual_count += cmd->data_length - length;
+			cmd->residual_count += cmd->t_iostate.data_length - length;
 		} else {
 			cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
-			cmd->residual_count = cmd->data_length - length;
+			cmd->residual_count = cmd->t_iostate.data_length - length;
 		}
 
-		cmd->data_length = length;
+		cmd->t_iostate.data_length = length;
 	}
 
 	target_complete_cmd(cmd, scsi_status);
@@ -818,7 +818,7 @@ void target_qf_do_work(struct work_struct *work)
 
 unsigned char *transport_dump_cmd_direction(struct se_cmd *cmd)
 {
-	switch (cmd->data_direction) {
+	switch (cmd->t_iostate.data_direction) {
 	case DMA_NONE:
 		return "NONE";
 	case DMA_FROM_DEVICE:
@@ -1118,21 +1118,21 @@ target_check_max_data_sg_nents(struct se_cmd *cmd, struct se_device *dev,
 		return TCM_NO_SENSE;
 	/*
 	 * Check if fabric enforced maximum SGL entries per I/O descriptor
-	 * exceeds se_cmd->data_length.  If true, set SCF_UNDERFLOW_BIT +
-	 * residual_count and reduce original cmd->data_length to maximum
+	 * exceeds se_cmd->t_iostate.data_length.  If true, set SCF_UNDERFLOW_BIT +
+	 * residual_count and reduce original cmd->t_iostate.data_length to maximum
 	 * length based on single PAGE_SIZE entry scatter-lists.
 	 */
 	mtl = (cmd->se_tfo->max_data_sg_nents * PAGE_SIZE);
-	if (cmd->data_length > mtl) {
+	if (cmd->t_iostate.data_length > mtl) {
 		/*
 		 * If an existing CDB overflow is present, calculate new residual
 		 * based on CDB size minus fabric maximum transfer length.
 		 *
 		 * If an existing CDB underflow is present, calculate new residual
-		 * based on original cmd->data_length minus fabric maximum transfer
+		 * based on original cmd->t_iostate.data_length minus fabric maximum transfer
 		 * length.
 		 *
-		 * Otherwise, set the underflow residual based on cmd->data_length
+		 * Otherwise, set the underflow residual based on cmd->t_iostate.data_length
 		 * minus fabric maximum transfer length.
 		 */
 		if (cmd->se_cmd_flags & SCF_OVERFLOW_BIT) {
@@ -1142,16 +1142,16 @@ target_check_max_data_sg_nents(struct se_cmd *cmd, struct se_device *dev,
 			cmd->residual_count = (orig_dl - mtl);
 		} else {
 			cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
-			cmd->residual_count = (cmd->data_length - mtl);
+			cmd->residual_count = (cmd->t_iostate.data_length - mtl);
 		}
-		cmd->data_length = mtl;
+		cmd->t_iostate.data_length = mtl;
 		/*
 		 * Reset sbc_check_prot() calculated protection payload
 		 * length based upon the new smaller MTL.
 		 */
-		if (cmd->prot_length) {
+		if (cmd->t_iostate.prot_length) {
 			u32 sectors = (mtl / dev->dev_attrib.block_size);
-			cmd->prot_length = dev->prot_length * sectors;
+			cmd->t_iostate.prot_length = dev->prot_length * sectors;
 		}
 	}
 	return TCM_NO_SENSE;
@@ -1163,14 +1163,14 @@ target_cmd_size_check(struct se_cmd *cmd, unsigned int size)
 	struct se_device *dev = cmd->se_dev;
 
 	if (cmd->unknown_data_length) {
-		cmd->data_length = size;
-	} else if (size != cmd->data_length) {
+		cmd->t_iostate.data_length = size;
+	} else if (size != cmd->t_iostate.data_length) {
 		pr_warn("TARGET_CORE[%s]: Expected Transfer Length:"
 			" %u does not match SCSI CDB Length: %u for SAM Opcode:"
 			" 0x%02x\n", cmd->se_tfo->get_fabric_name(),
-				cmd->data_length, size, cmd->t_task_cdb[0]);
+				cmd->t_iostate.data_length, size, cmd->t_task_cdb[0]);
 
-		if (cmd->data_direction == DMA_TO_DEVICE &&
+		if (cmd->t_iostate.data_direction == DMA_TO_DEVICE &&
 		    cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) {
 			pr_err("Rejecting underflow/overflow WRITE data\n");
 			return TCM_INVALID_CDB_FIELD;
@@ -1188,17 +1188,17 @@ target_cmd_size_check(struct se_cmd *cmd, unsigned int size)
 		}
 		/*
 		 * For the overflow case keep the existing fabric provided
-		 * ->data_length.  Otherwise for the underflow case, reset
-		 * ->data_length to the smaller SCSI expected data transfer
+		 * ->t_iostate.data_length.  Otherwise for the underflow case, reset
+		 * ->t_iostate.data_length to the smaller SCSI expected data transfer
 		 * length.
 		 */
-		if (size > cmd->data_length) {
+		if (size > cmd->t_iostate.data_length) {
 			cmd->se_cmd_flags |= SCF_OVERFLOW_BIT;
-			cmd->residual_count = (size - cmd->data_length);
+			cmd->residual_count = (size - cmd->t_iostate.data_length);
 		} else {
 			cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT;
-			cmd->residual_count = (cmd->data_length - size);
-			cmd->data_length = size;
+			cmd->residual_count = (cmd->t_iostate.data_length - size);
+			cmd->t_iostate.data_length = size;
 		}
 	}
 
@@ -1233,8 +1233,8 @@ void transport_init_se_cmd(
 
 	cmd->se_tfo = tfo;
 	cmd->se_sess = se_sess;
-	cmd->data_length = data_length;
-	cmd->data_direction = data_direction;
+	cmd->t_iostate.data_length = data_length;
+	cmd->t_iostate.data_direction = data_direction;
 	cmd->sam_task_attr = task_attr;
 	cmd->sense_buffer = sense_buffer;
 
@@ -1418,7 +1418,7 @@ transport_generic_map_mem_to_cmd(struct se_cmd *cmd, struct scatterlist *sgl,
  * @cdb: pointer to SCSI CDB
  * @sense: pointer to SCSI sense buffer
  * @unpacked_lun: unpacked LUN to reference for struct se_lun
 * @data_length: fabric expected data transfer length
  * @task_addr: SAM task attribute
  * @data_dir: DMA data direction
  * @flags: flags for command submission from target_sc_flags_tables
@@ -1525,7 +1525,7 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
 		 * -> transport_generic_cmd_sequencer().
 		 */
 		if (!(se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) &&
-		     se_cmd->data_direction == DMA_FROM_DEVICE) {
+		     se_cmd->t_iostate.data_direction == DMA_FROM_DEVICE) {
 			unsigned char *buf = NULL;
 
 			if (sgl)
@@ -1564,7 +1564,7 @@ EXPORT_SYMBOL(target_submit_cmd_map_sgls);
  * @cdb: pointer to SCSI CDB
  * @sense: pointer to SCSI sense buffer
  * @unpacked_lun: unpacked LUN to reference for struct se_lun
 * @data_length: fabric expected data transfer length
  * @task_addr: SAM task attribute
  * @data_dir: DMA data direction
  * @flags: flags for command submission from target_sc_flags_tables
@@ -1810,7 +1810,7 @@ static int target_write_prot_action(struct se_cmd *cmd)
 	 * device has PI enabled, if the transport has not already generated
 	 * PI using hardware WRITE_INSERT offload.
 	 */
-	switch (cmd->prot_op) {
+	switch (cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DOUT_INSERT:
 		if (!(cmd->se_sess->sup_prot_ops & TARGET_PROT_DOUT_INSERT))
 			sbc_dif_generate(cmd);
@@ -1819,8 +1819,8 @@ static int target_write_prot_action(struct se_cmd *cmd)
 		if (cmd->se_sess->sup_prot_ops & TARGET_PROT_DOUT_STRIP)
 			break;
 
-		sectors = cmd->data_length >> ilog2(cmd->se_dev->dev_attrib.block_size);
-		cmd->pi_err = sbc_dif_verify(cmd, cmd->t_task_lba,
+		sectors = cmd->t_iostate.data_length >> ilog2(cmd->se_dev->dev_attrib.block_size);
+		cmd->pi_err = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba,
 					     sectors, 0, cmd->t_iomem.t_prot_sg, 0);
 		if (unlikely(cmd->pi_err)) {
 			spin_lock_irq(&cmd->t_state_lock);
@@ -1998,7 +1998,7 @@ static void transport_complete_qf(struct se_cmd *cmd)
 		goto out;
 	}
 
-	switch (cmd->data_direction) {
+	switch (cmd->t_iostate.data_direction) {
 	case DMA_FROM_DEVICE:
 		if (cmd->scsi_status)
 			goto queue_status;
@@ -2044,13 +2044,13 @@ static void transport_handle_queue_full(
 
 static bool target_read_prot_action(struct se_cmd *cmd)
 {
-	switch (cmd->prot_op) {
+	switch (cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DIN_STRIP:
 		if (!(cmd->se_sess->sup_prot_ops & TARGET_PROT_DIN_STRIP)) {
-			u32 sectors = cmd->data_length >>
+			u32 sectors = cmd->t_iostate.data_length >>
 				  ilog2(cmd->se_dev->dev_attrib.block_size);
 
-			cmd->pi_err = sbc_dif_verify(cmd, cmd->t_task_lba,
+			cmd->pi_err = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba,
 						     sectors, 0,
 						     cmd->t_iomem.t_prot_sg, 0);
 			if (cmd->pi_err)
@@ -2111,7 +2111,7 @@ static void target_complete_ok_work(struct work_struct *work)
 	if (cmd->transport_complete_callback) {
 		sense_reason_t rc;
 		bool caw = (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE);
-		bool zero_dl = !(cmd->data_length);
+		bool zero_dl = !(cmd->t_iostate.data_length);
 		int post_ret = 0;
 
 		rc = cmd->transport_complete_callback(cmd, true, &post_ret);
@@ -2133,12 +2133,12 @@ static void target_complete_ok_work(struct work_struct *work)
 	}
 
 queue_rsp:
-	switch (cmd->data_direction) {
+	switch (cmd->t_iostate.data_direction) {
 	case DMA_FROM_DEVICE:
 		if (cmd->scsi_status)
 			goto queue_status;
 
-		atomic_long_add(cmd->data_length,
+		atomic_long_add(cmd->t_iostate.data_length,
 				&cmd->se_lun->lun_stats.tx_data_octets);
 		/*
 		 * Perform READ_STRIP of PI using software emulation when
@@ -2162,13 +2162,13 @@ queue_rsp:
 			goto queue_full;
 		break;
 	case DMA_TO_DEVICE:
-		atomic_long_add(cmd->data_length,
+		atomic_long_add(cmd->t_iostate.data_length,
 				&cmd->se_lun->lun_stats.rx_data_octets);
 		/*
 		 * Check if we need to send READ payload for BIDI-COMMAND
 		 */
 		if (cmd->se_cmd_flags & SCF_BIDI) {
-			atomic_long_add(cmd->data_length,
+			atomic_long_add(cmd->t_iostate.data_length,
 					&cmd->se_lun->lun_stats.tx_data_octets);
 			ret = cmd->se_tfo->queue_data_in(cmd);
 			if (ret == -EAGAIN || ret == -ENOMEM)
@@ -2193,7 +2193,7 @@ queue_status:
 
 queue_full:
 	pr_debug("Handling complete_ok QUEUE_FULL: se_cmd: %p,"
-		" data_direction: %d\n", cmd, cmd->data_direction);
+		" t_iostate.data_direction: %d\n", cmd, cmd->t_iostate.data_direction);
 	cmd->t_state = TRANSPORT_COMPLETE_QF_OK;
 	transport_handle_queue_full(cmd, cmd->se_dev);
 }
@@ -2381,11 +2381,11 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 	int ret = 0;
 	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
 
-	if (cmd->prot_op != TARGET_PROT_NORMAL &&
+	if (cmd->t_iostate.prot_op != TARGET_PROT_NORMAL &&
 	    !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
 		ret = target_alloc_sgl(&cmd->t_iomem.t_prot_sg,
 				       &cmd->t_iomem.t_prot_nents,
-				       cmd->prot_length, true, false);
+				       cmd->t_iostate.prot_length, true, false);
 		if (ret < 0)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
@@ -2396,17 +2396,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 	 * beforehand.
 	 */
 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) &&
-	    cmd->data_length) {
+	    cmd->t_iostate.data_length) {
 
 		if ((cmd->se_cmd_flags & SCF_BIDI) ||
 		    (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)) {
 			u32 bidi_length;
 
 			if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE)
-				bidi_length = cmd->t_task_nolb *
+				bidi_length = cmd->t_iostate.t_task_nolb *
 					      cmd->se_dev->dev_attrib.block_size;
 			else
-				bidi_length = cmd->data_length;
+				bidi_length = cmd->t_iostate.data_length;
 
 			ret = target_alloc_sgl(&cmd->t_iomem.t_bidi_data_sg,
 					       &cmd->t_iomem.t_bidi_data_nents,
@@ -2417,16 +2417,16 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 
 		ret = target_alloc_sgl(&cmd->t_iomem.t_data_sg,
 				       &cmd->t_iomem.t_data_nents,
-				       cmd->data_length, zero_flag, false);
+				       cmd->t_iostate.data_length, zero_flag, false);
 		if (ret < 0)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	} else if ((cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) &&
-		    cmd->data_length) {
+		    cmd->t_iostate.data_length) {
 		/*
 		 * Special case for COMPARE_AND_WRITE with fabrics
 		 * using SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC.
 		 */
-		u32 caw_length = cmd->t_task_nolb *
+		u32 caw_length = cmd->t_iostate.t_task_nolb *
 				 cmd->se_dev->dev_attrib.block_size;
 
 		ret = target_alloc_sgl(&cmd->t_iomem.t_bidi_data_sg,
@@ -2441,7 +2441,7 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 	 * and let it call back once the write buffers are ready.
 	 */
 	target_add_to_state_list(cmd);
-	if (cmd->data_direction != DMA_TO_DEVICE || cmd->data_length == 0) {
+	if (cmd->t_iostate.data_direction != DMA_TO_DEVICE || cmd->t_iostate.data_length == 0) {
 		target_execute_cmd(cmd);
 		return 0;
 	}
@@ -2919,7 +2919,7 @@ static int translate_sense_reason(struct se_cmd *cmd, sense_reason_t reason)
 	if (si->add_sector_info)
 		return scsi_set_sense_information(buffer,
 						  cmd->scsi_sense_length,
-						  cmd->bad_sector);
+						  cmd->t_iostate.bad_sector);
 
 	return 0;
 }
@@ -3016,7 +3016,7 @@ void transport_send_task_abort(struct se_cmd *cmd)
 	 * response.  This response with TASK_ABORTED status will be
 	 * queued back to fabric module by transport_check_aborted_status().
 	 */
-	if (cmd->data_direction == DMA_TO_DEVICE) {
+	if (cmd->t_iostate.data_direction == DMA_TO_DEVICE) {
 		if (cmd->se_tfo->write_pending_status(cmd) != 0) {
 			spin_lock_irqsave(&cmd->t_state_lock, flags);
 			if (cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS) {
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 5013611..d6758a1 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -427,7 +427,7 @@ static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 
 	mb = udev->mb_addr;
 	cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
-	data_length = se_cmd->data_length;
+	data_length = se_cmd->t_iostate.data_length;
 	if (se_cmd->se_cmd_flags & SCF_BIDI) {
 		BUG_ON(!(se_cmd->t_iomem.t_bidi_data_sg &&
 			 se_cmd->t_iomem.t_bidi_data_nents));
@@ -493,7 +493,7 @@ static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
 	 */
 	iov = &entry->req.iov[0];
 	iov_cnt = 0;
-	copy_to_data_area = (se_cmd->data_direction == DMA_TO_DEVICE
+	copy_to_data_area = (se_cmd->t_iostate.data_direction == DMA_TO_DEVICE
 		|| se_cmd->se_cmd_flags & SCF_BIDI);
 	alloc_and_scatter_data_area(udev, se_cmd->t_iomem.t_data_sg,
 		se_cmd->t_iomem.t_data_nents, &iov, &iov_cnt, copy_to_data_area);
@@ -587,18 +587,18 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 		gather_data_area(udev, bitmap,
 			se_cmd->t_iomem.t_bidi_data_sg, se_cmd->t_iomem.t_bidi_data_nents);
 		free_data_area(udev, cmd);
-	} else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+	} else if (se_cmd->t_iostate.data_direction == DMA_FROM_DEVICE) {
 		DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
 
 		bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
 		gather_data_area(udev, bitmap,
 			se_cmd->t_iomem.t_data_sg, se_cmd->t_iomem.t_data_nents);
 		free_data_area(udev, cmd);
-	} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
+	} else if (se_cmd->t_iostate.data_direction == DMA_TO_DEVICE) {
 		free_data_area(udev, cmd);
-	} else if (se_cmd->data_direction != DMA_NONE) {
+	} else if (se_cmd->t_iostate.data_direction != DMA_NONE) {
 		pr_warn("TCMU: data direction was %d!\n",
-			se_cmd->data_direction);
+			se_cmd->t_iostate.data_direction);
 	}
 
 	target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
index 0a4bd8a..b6aeb15 100644
--- a/drivers/target/target_core_xcopy.c
+++ b/drivers/target/target_core_xcopy.c
@@ -564,7 +564,7 @@ static int target_xcopy_setup_pt_cmd(
 	if (alloc_mem) {
 		rc = target_alloc_sgl(&cmd->t_iomem.t_data_sg,
 				      &cmd->t_iomem.t_data_nents,
-				      cmd->data_length, false, false);
+				      cmd->t_iostate.data_length, false, false);
 		if (rc < 0) {
 			ret = rc;
 			goto out;
@@ -607,7 +607,7 @@ static int target_xcopy_issue_pt_cmd(struct xcopy_pt_cmd *xpt_cmd)
 	if (sense_rc)
 		return -EINVAL;
 
-	if (se_cmd->data_direction == DMA_TO_DEVICE)
+	if (se_cmd->t_iostate.data_direction == DMA_TO_DEVICE)
 		target_execute_cmd(se_cmd);
 
 	wait_for_completion_interruptible(&xpt_cmd->xpt_passthrough_sem);
@@ -927,9 +927,9 @@ static sense_reason_t target_rcr_operating_parameters(struct se_cmd *se_cmd)
 		return TCM_OUT_OF_RESOURCES;
 	}
 
-	if (se_cmd->data_length < 54) {
+	if (se_cmd->t_iostate.data_length < 54) {
 		pr_err("Receive Copy Results Op Parameters length"
-		       " too small: %u\n", se_cmd->data_length);
+		       " too small: %u\n", se_cmd->t_iostate.data_length);
 		transport_kunmap_data_sg(se_cmd);
 		return TCM_INVALID_CDB_FIELD;
 	}
@@ -1013,7 +1013,7 @@ sense_reason_t target_do_receive_copy_results(struct se_cmd *se_cmd)
 	sense_reason_t rc = TCM_NO_SENSE;
 
 	pr_debug("Entering target_do_receive_copy_results: SA: 0x%02x, List ID:"
-		" 0x%02x, AL: %u\n", sa, list_id, se_cmd->data_length);
+		" 0x%02x, AL: %u\n", sa, list_id, se_cmd->t_iostate.data_length);
 
 	if (list_id != 0) {
 		pr_err("Receive Copy Results with non zero list identifier"
diff --git a/drivers/target/tcm_fc/tfc_cmd.c b/drivers/target/tcm_fc/tfc_cmd.c
index 04c98d0..b3730e9 100644
--- a/drivers/target/tcm_fc/tfc_cmd.c
+++ b/drivers/target/tcm_fc/tfc_cmd.c
@@ -56,7 +56,7 @@ static void _ft_dump_cmd(struct ft_cmd *cmd, const char *caller)
 
 	pr_debug("%s: cmd %p data_nents %u len %u se_cmd_flags <0x%x>\n",
 		caller, cmd, se_cmd->t_iomem.t_data_nents,
-	       se_cmd->data_length, se_cmd->se_cmd_flags);
+	       se_cmd->t_iostate.data_length, se_cmd->se_cmd_flags);
 
 	for_each_sg(se_cmd->t_iomem.t_data_sg, sg, se_cmd->t_iomem.t_data_nents, count)
 		pr_debug("%s: cmd %p sg %p page %p "
@@ -191,7 +191,7 @@ int ft_write_pending_status(struct se_cmd *se_cmd)
 {
 	struct ft_cmd *cmd = container_of(se_cmd, struct ft_cmd, se_cmd);
 
-	return cmd->write_data_len != se_cmd->data_length;
+	return cmd->write_data_len != se_cmd->t_iostate.data_length;
 }
 
 /*
@@ -219,7 +219,7 @@ int ft_write_pending(struct se_cmd *se_cmd)
 
 	txrdy = fc_frame_payload_get(fp, sizeof(*txrdy));
 	memset(txrdy, 0, sizeof(*txrdy));
-	txrdy->ft_burst_len = htonl(se_cmd->data_length);
+	txrdy->ft_burst_len = htonl(se_cmd->t_iostate.data_length);
 
 	cmd->seq = lport->tt.seq_start_next(cmd->seq);
 	fc_fill_fc_hdr(fp, FC_RCTL_DD_DATA_DESC, ep->did, ep->sid, FC_TYPE_FCP,
diff --git a/drivers/target/tcm_fc/tfc_io.c b/drivers/target/tcm_fc/tfc_io.c
index 86ae4c5..0c98099 100644
--- a/drivers/target/tcm_fc/tfc_io.c
+++ b/drivers/target/tcm_fc/tfc_io.c
@@ -84,7 +84,7 @@ int ft_queue_data_in(struct se_cmd *se_cmd)
 	lport = ep->lp;
 	cmd->seq = lport->tt.seq_start_next(cmd->seq);
 
-	remaining = se_cmd->data_length;
+	remaining = se_cmd->t_iostate.data_length;
 
 	/*
 	 * Setup to use first mem list entry, unless no data.
@@ -279,10 +279,10 @@ void ft_recv_write_data(struct ft_cmd *cmd, struct fc_frame *fp)
 		goto drop;
 	frame_len -= sizeof(*fh);
 	from = fc_frame_payload_get(fp, 0);
-	if (rel_off >= se_cmd->data_length)
+	if (rel_off >= se_cmd->t_iostate.data_length)
 		goto drop;
-	if (frame_len + rel_off > se_cmd->data_length)
-		frame_len = se_cmd->data_length - rel_off;
+	if (frame_len + rel_off > se_cmd->t_iostate.data_length)
+		frame_len = se_cmd->t_iostate.data_length - rel_off;
 
 	/*
 	 * Setup to use first mem list entry, unless no data.
@@ -328,7 +328,7 @@ void ft_recv_write_data(struct ft_cmd *cmd, struct fc_frame *fp)
 		cmd->write_data_len += tlen;
 	}
 last_frame:
-	if (cmd->write_data_len == se_cmd->data_length) {
+	if (cmd->write_data_len == se_cmd->t_iostate.data_length) {
 		INIT_WORK(&cmd->work, ft_execute_work);
 		queue_work(cmd->sess->tport->tpg->workqueue, &cmd->work);
 	}
diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
index 8986132..66a2803 100644
--- a/drivers/usb/gadget/function/f_tcm.c
+++ b/drivers/usb/gadget/function/f_tcm.c
@@ -213,14 +213,14 @@ static int bot_send_read_response(struct usbg_cmd *cmd)
 	}
 
 	if (!gadget->sg_supported) {
-		cmd->data_buf = kmalloc(se_cmd->data_length, GFP_ATOMIC);
+		cmd->data_buf = kmalloc(se_cmd->t_iostate.data_length, GFP_ATOMIC);
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
 		sg_copy_to_buffer(se_cmd->t_iomem.t_data_sg,
 				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
-				se_cmd->data_length);
+				se_cmd->t_iostate.data_length);
 
 		fu->bot_req_in->buf = cmd->data_buf;
 	} else {
@@ -230,7 +230,7 @@ static int bot_send_read_response(struct usbg_cmd *cmd)
 	}
 
 	fu->bot_req_in->complete = bot_read_compl;
-	fu->bot_req_in->length = se_cmd->data_length;
+	fu->bot_req_in->length = se_cmd->t_iostate.data_length;
 	fu->bot_req_in->context = cmd;
 	ret = usb_ep_queue(fu->ep_in, fu->bot_req_in, GFP_ATOMIC);
 	if (ret)
@@ -257,7 +257,7 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
 	}
 
 	if (!gadget->sg_supported) {
-		cmd->data_buf = kmalloc(se_cmd->data_length, GFP_KERNEL);
+		cmd->data_buf = kmalloc(se_cmd->t_iostate.data_length, GFP_KERNEL);
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
@@ -269,7 +269,7 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
 	}
 
 	fu->bot_req_out->complete = usbg_data_write_cmpl;
-	fu->bot_req_out->length = se_cmd->data_length;
+	fu->bot_req_out->length = se_cmd->t_iostate.data_length;
 	fu->bot_req_out->context = cmd;
 
 	ret = usbg_prepare_w_request(cmd, fu->bot_req_out);
@@ -515,14 +515,14 @@ static int uasp_prepare_r_request(struct usbg_cmd *cmd)
 	struct uas_stream *stream = cmd->stream;
 
 	if (!gadget->sg_supported) {
-		cmd->data_buf = kmalloc(se_cmd->data_length, GFP_ATOMIC);
+		cmd->data_buf = kmalloc(se_cmd->t_iostate.data_length, GFP_ATOMIC);
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
 		sg_copy_to_buffer(se_cmd->t_iomem.t_data_sg,
 				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
-				se_cmd->data_length);
+				se_cmd->t_iostate.data_length);
 
 		stream->req_in->buf = cmd->data_buf;
 	} else {
@@ -532,7 +532,7 @@ static int uasp_prepare_r_request(struct usbg_cmd *cmd)
 	}
 
 	stream->req_in->complete = uasp_status_data_cmpl;
-	stream->req_in->length = se_cmd->data_length;
+	stream->req_in->length = se_cmd->t_iostate.data_length;
 	stream->req_in->context = cmd;
 
 	cmd->state = UASP_SEND_STATUS;
@@ -963,7 +963,7 @@ static void usbg_data_write_cmpl(struct usb_ep *ep, struct usb_request *req)
 		sg_copy_from_buffer(se_cmd->t_iomem.t_data_sg,
 				se_cmd->t_iomem.t_data_nents,
 				cmd->data_buf,
-				se_cmd->data_length);
+				se_cmd->t_iostate.data_length);
 	}
 
 	complete(&cmd->write_complete);
@@ -980,7 +980,7 @@ static int usbg_prepare_w_request(struct usbg_cmd *cmd, struct usb_request *req)
 	struct usb_gadget *gadget = fuas_to_gadget(fu);
 
 	if (!gadget->sg_supported) {
-		cmd->data_buf = kmalloc(se_cmd->data_length, GFP_ATOMIC);
+		cmd->data_buf = kmalloc(se_cmd->t_iostate.data_length, GFP_ATOMIC);
 		if (!cmd->data_buf)
 			return -ENOMEM;
 
@@ -992,7 +992,7 @@ static int usbg_prepare_w_request(struct usbg_cmd *cmd, struct usb_request *req)
 	}
 
 	req->complete = usbg_data_write_cmpl;
-	req->length = se_cmd->data_length;
+	req->length = se_cmd->t_iostate.data_length;
 	req->context = cmd;
 	return 0;
 }
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 9d6320e..ac46c55 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -793,7 +793,7 @@ static void vhost_scsi_submission_work(struct work_struct *work)
 		if (cmd->tvc_prot_sgl_count)
 			sg_prot_ptr = cmd->tvc_prot_sgl;
 		else
-			se_cmd->prot_pto = true;
+			se_cmd->t_iostate.prot_pto = true;
 	} else {
 		sg_ptr = NULL;
 	}
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 29ee45b..bd4346b 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -449,6 +449,29 @@ struct target_iomem {
 	unsigned int		t_prot_nents;
 };
 
+struct target_iostate {
+	unsigned long long	t_task_lba;
+	unsigned int		t_task_nolb;
+	/* Total size in bytes associated with command */
+	unsigned int		data_length;
+	/* See include/linux/dma-mapping.h */
+	enum dma_data_direction data_direction;
+
+	/* DIF related members */
+	enum target_prot_op	prot_op;
+	enum target_prot_type	prot_type;
+	u8			prot_checks;
+	bool			prot_pto;
+	u32			prot_length;
+	u32			reftag_seed;
+	sector_t		bad_sector;
+
+	struct target_iomem	*iomem;
+	struct se_device	*se_dev;
+	void			(*t_comp_func)(struct target_iostate *, u16);
+	void			*priv;
+};
+
 struct se_cmd {
 	/* SAM response code being sent to initiator */
 	u8			scsi_status;
@@ -461,8 +484,6 @@ struct se_cmd {
 	u64			tag; /* SAM command identifier aka task tag */
 	/* Delay for ALUA Active/NonOptimized state access in milliseconds */
 	int			alua_nonop_delay;
-	/* See include/linux/dma-mapping.h */
-	enum dma_data_direction	data_direction;
 	/* For SAM Task Attribute */
 	int			sam_task_attr;
 	/* Used for se_sess->sess_tag_pool */
@@ -471,8 +492,6 @@ struct se_cmd {
 	enum transport_state_table t_state;
 	/* See se_cmd_flags_table */
 	u32			se_cmd_flags;
-	/* Total size in bytes associated with command */
-	u32			data_length;
 	u32			residual_count;
 	u64			orig_fe_lun;
 	/* Persistent Reservation key */
@@ -495,8 +514,7 @@ struct se_cmd {
 
 	unsigned char		*t_task_cdb;
 	unsigned char		__t_task_cdb[TCM_MAX_COMMAND_SIZE];
-	unsigned long long	t_task_lba;
-	unsigned int		t_task_nolb;
+
 	unsigned int		transport_state;
 #define CMD_T_ABORTED		(1 << 0)
 #define CMD_T_ACTIVE		(1 << 1)
@@ -512,6 +530,7 @@ struct se_cmd {
 	struct completion	t_transport_stop_comp;
 
 	struct work_struct	work;
+	struct target_iostate	t_iostate;
 	struct target_iomem	t_iomem;
 
 	/* Used for lun->lun_ref counting */
@@ -522,15 +541,7 @@ struct se_cmd {
 	/* backend private data */
 	void			*priv;
 
-	/* DIF related members */
-	enum target_prot_op	prot_op;
-	enum target_prot_type	prot_type;
-	u8			prot_checks;
-	bool			prot_pto;
-	u32			prot_length;
-	u32			reftag_seed;
 	sense_reason_t		pi_err;
-	sector_t		bad_sector;
 	int			cpuid;
 };
 
diff --git a/include/target/target_core_fabric.h b/include/target/target_core_fabric.h
index 5cd6faa..86b84d0d 100644
--- a/include/target/target_core_fabric.h
+++ b/include/target/target_core_fabric.h
@@ -197,7 +197,7 @@ target_reverse_dma_direction(struct se_cmd *se_cmd)
 	if (se_cmd->se_cmd_flags & SCF_BIDI)
 		return DMA_BIDIRECTIONAL;
 
-	switch (se_cmd->data_direction) {
+	switch (se_cmd->t_iostate.data_direction) {
 	case DMA_TO_DEVICE:
 		return DMA_FROM_DEVICE;
 	case DMA_FROM_DEVICE:
diff --git a/include/trace/events/target.h b/include/trace/events/target.h
index 50fea66..7c3743d 100644
--- a/include/trace/events/target.h
+++ b/include/trace/events/target.h
@@ -146,7 +146,7 @@ TRACE_EVENT(target_sequencer_start,
 	TP_fast_assign(
 		__entry->unpacked_lun	= cmd->orig_fe_lun;
 		__entry->opcode		= cmd->t_task_cdb[0];
-		__entry->data_length	= cmd->data_length;
+		__entry->data_length	= cmd->t_iostate.data_length;
 		__entry->task_attribute	= cmd->sam_task_attr;
 		memcpy(__entry->cdb, cmd->t_task_cdb, TCM_MAX_COMMAND_SIZE);
 		__assign_str(initiator, cmd->se_sess->se_node_acl->initiatorname);
@@ -184,7 +184,7 @@ TRACE_EVENT(target_cmd_complete,
 	TP_fast_assign(
 		__entry->unpacked_lun	= cmd->orig_fe_lun;
 		__entry->opcode		= cmd->t_task_cdb[0];
-		__entry->data_length	= cmd->data_length;
+		__entry->data_length	= cmd->t_iostate.data_length;
 		__entry->task_attribute	= cmd->sam_task_attr;
 		__entry->scsi_status	= cmd->scsi_status;
 		__entry->sense_length	= cmd->scsi_status == SAM_STAT_CHECK_CONDITION ?
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 04/14] target: Add target_complete_ios wrapper
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (2 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 03/14] target: Add target_iostate descriptor Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 05/14] target: Setup target_iostate memory in __target_execute_cmd Nicholas A. Bellinger
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch adds a basic target_complete_ios() wrapper to
dereference struct se_cmd from struct target_iostate, and
invoke existing target_complete_cmd() code.

It also includes PSCSI + TCMU backend driver conversions.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_pscsi.c     | 4 ++--
 drivers/target/target_core_transport.c | 9 ++++++++-
 drivers/target/target_core_user.c      | 4 ++--
 include/target/target_core_backend.h   | 1 +
 4 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 105894a..b5728bc 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1109,13 +1109,13 @@ static void pscsi_req_done(struct request *req, int uptodate)
 
 	switch (host_byte(pt->pscsi_result)) {
 	case DID_OK:
-		target_complete_cmd(cmd, cmd->scsi_status);
+		target_complete_ios(&cmd->t_iostate, cmd->scsi_status);
 		break;
 	default:
 		pr_debug("PSCSI Host Byte exception at cmd: %p CDB:"
 			" 0x%02x Result: 0x%08x\n", cmd, pt->pscsi_cdb[0],
 			pt->pscsi_result);
-		target_complete_cmd(cmd, SAM_STAT_CHECK_CONDITION);
+		target_complete_ios(&cmd->t_iostate, SAM_STAT_CHECK_CONDITION);
 		break;
 	}
 
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 18661da..2207624 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -714,7 +714,6 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 
 	cmd->scsi_status = scsi_status;
 
-
 	spin_lock_irqsave(&cmd->t_state_lock, flags);
 	cmd->transport_state &= ~CMD_T_BUSY;
 
@@ -752,6 +751,14 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
 }
 EXPORT_SYMBOL(target_complete_cmd);
 
+void target_complete_ios(struct target_iostate *ios, u16 scsi_status)
+{
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+
+	target_complete_cmd(cmd, scsi_status);
+}
+EXPORT_SYMBOL(target_complete_ios);
+
 void target_complete_cmd_with_length(struct se_cmd *cmd, u8 scsi_status, int length)
 {
 	if (scsi_status == SAM_STAT_GOOD && length < cmd->t_iostate.data_length) {
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index d6758a1..505b312 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -601,7 +601,7 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
 			se_cmd->t_iostate.data_direction);
 	}
 
-	target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
+	target_complete_ios(&cmd->se_cmd->t_iostate, entry->rsp.scsi_status);
 	cmd->se_cmd = NULL;
 
 	kmem_cache_free(tcmu_cmd_cache, cmd);
@@ -680,7 +680,7 @@ static int tcmu_check_expired_cmd(int id, void *p, void *data)
 		return 0;
 
 	set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
-	target_complete_cmd(cmd->se_cmd, SAM_STAT_CHECK_CONDITION);
+	target_complete_ios(&cmd->se_cmd->t_iostate, SAM_STAT_CHECK_CONDITION);
 	cmd->se_cmd = NULL;
 
 	kmem_cache_free(tcmu_cmd_cache, cmd);
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index d8ab510..2f6deb0 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -58,6 +58,7 @@ void	target_backend_unregister(const struct target_backend_ops *);
 
 void	target_complete_cmd(struct se_cmd *, u8);
 void	target_complete_cmd_with_length(struct se_cmd *, u8, int);
+void	target_complete_ios(struct target_iostate *, u16);
 
 sense_reason_t	spc_parse_cdb(struct se_cmd *cmd, unsigned int *size);
 sense_reason_t	spc_emulate_report_luns(struct se_cmd *cmd);
-- 
1.9.1


* [PATCH 05/14] target: Setup target_iostate memory in __target_execute_cmd
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (3 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 04/14] target: Add target_complete_ios wrapper Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 06/14] target: Convert se_cmd->execute_cmd to target_iostate Nicholas A. Bellinger
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch sets up the required target_iostate pointers
passed down to se_cmd->execute_cmd() via existing sbc_ops.

This includes:

   - struct se_device,
   - struct target_iomem,
   - and ->t_comp_func() callback.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_transport.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 2207624..4156059 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1797,6 +1797,12 @@ void __target_execute_cmd(struct se_cmd *cmd, bool do_checks)
 			goto err;
 		}
 	}
+	/*
+	 * Setup t_iostate + t_iomem for backend device submission
+	 */
+	cmd->t_iostate.se_dev = cmd->se_dev;
+	cmd->t_iostate.iomem = &cmd->t_iomem;
+	cmd->t_iostate.t_comp_func = &target_complete_ios;
 
 	ret = cmd->execute_cmd(cmd);
 	if (!ret)
-- 
1.9.1



* [PATCH 06/14] target: Convert se_cmd->execute_cmd to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (4 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 05/14] target: Setup target_iostate memory in __target_execute_cmd Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 07/14] target/sbc: Convert sbc_ops->execute_rw " Nicholas A. Bellinger
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts the se_cmd->execute_cmd() callback
to accept target_iostate, and updates existing users
tree-wide.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_alua.c      |  9 ++++---
 drivers/target/target_core_alua.h      |  6 ++---
 drivers/target/target_core_device.c    |  2 +-
 drivers/target/target_core_pr.c        | 12 ++++++---
 drivers/target/target_core_pr.h        |  8 +++---
 drivers/target/target_core_pscsi.c     |  5 ++--
 drivers/target/target_core_sbc.c       | 47 ++++++++++++++++++++++++++--------
 drivers/target/target_core_spc.c       | 19 +++++++++-----
 drivers/target/target_core_transport.c |  2 +-
 drivers/target/target_core_user.c      |  3 ++-
 drivers/target/target_core_xcopy.c     |  6 +++--
 drivers/target/target_core_xcopy.h     |  4 +--
 include/target/target_core_backend.h   |  4 +--
 include/target/target_core_base.h      |  2 +-
 14 files changed, 86 insertions(+), 43 deletions(-)

diff --git a/drivers/target/target_core_alua.c b/drivers/target/target_core_alua.c
index c806a96..5cb1c88 100644
--- a/drivers/target/target_core_alua.c
+++ b/drivers/target/target_core_alua.c
@@ -63,8 +63,9 @@ struct t10_alua_lu_gp *default_lu_gp;
  * See sbc3r35 section 5.23
  */
 sense_reason_t
-target_emulate_report_referrals(struct se_cmd *cmd)
+target_emulate_report_referrals(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct t10_alua_lba_map *map;
 	struct t10_alua_lba_map_member *map_mem;
@@ -143,8 +144,9 @@ target_emulate_report_referrals(struct se_cmd *cmd)
  * See spc4r17 section 6.27
  */
 sense_reason_t
-target_emulate_report_target_port_groups(struct se_cmd *cmd)
+target_emulate_report_target_port_groups(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct t10_alua_tg_pt_gp *tg_pt_gp;
 	struct se_lun *lun;
@@ -276,8 +278,9 @@ target_emulate_report_target_port_groups(struct se_cmd *cmd)
  * See spc4r17 section 6.35
  */
 sense_reason_t
-target_emulate_set_target_port_groups(struct se_cmd *cmd)
+target_emulate_set_target_port_groups(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct se_lun *l_lun = cmd->se_lun;
 	struct se_node_acl *nacl = cmd->se_sess->se_node_acl;
diff --git a/drivers/target/target_core_alua.h b/drivers/target/target_core_alua.h
index 9b250f9..1e84af2 100644
--- a/drivers/target/target_core_alua.h
+++ b/drivers/target/target_core_alua.h
@@ -88,9 +88,9 @@ extern struct kmem_cache *t10_alua_tg_pt_gp_cache;
 extern struct kmem_cache *t10_alua_lba_map_cache;
 extern struct kmem_cache *t10_alua_lba_map_mem_cache;
 
-extern sense_reason_t target_emulate_report_target_port_groups(struct se_cmd *);
-extern sense_reason_t target_emulate_set_target_port_groups(struct se_cmd *);
-extern sense_reason_t target_emulate_report_referrals(struct se_cmd *);
+extern sense_reason_t target_emulate_report_target_port_groups(struct target_iostate *);
+extern sense_reason_t target_emulate_set_target_port_groups(struct target_iostate *);
+extern sense_reason_t target_emulate_report_referrals(struct target_iostate *);
 extern int core_alua_check_nonop_delay(struct se_cmd *);
 extern int core_alua_do_port_transition(struct t10_alua_tg_pt_gp *,
 				struct se_device *, struct se_lun *,
diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index 910c990..d82b9b4 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -1026,7 +1026,7 @@ void core_dev_release_virtual_lun0(void)
  */
 sense_reason_t
 passthrough_parse_cdb(struct se_cmd *cmd,
-	sense_reason_t (*exec_cmd)(struct se_cmd *cmd))
+	sense_reason_t (*exec_cmd)(struct target_iostate *ios))
 {
 	unsigned char *cdb = cmd->t_task_cdb;
 
diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index f91058f..022084f 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -197,8 +197,9 @@ static int target_check_scsi2_reservation_conflict(struct se_cmd *cmd)
 }
 
 sense_reason_t
-target_scsi2_reservation_release(struct se_cmd *cmd)
+target_scsi2_reservation_release(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct se_session *sess = cmd->se_sess;
 	struct se_portal_group *tpg;
@@ -243,8 +244,9 @@ out:
 }
 
 sense_reason_t
-target_scsi2_reservation_reserve(struct se_cmd *cmd)
+target_scsi2_reservation_reserve(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct se_session *sess = cmd->se_sess;
 	struct se_portal_group *tpg;
@@ -3565,8 +3567,9 @@ static unsigned long long core_scsi3_extract_reservation_key(unsigned char *cdb)
  * See spc4r17 section 6.14 Table 170
  */
 sense_reason_t
-target_scsi3_emulate_pr_out(struct se_cmd *cmd)
+target_scsi3_emulate_pr_out(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	unsigned char *cdb = &cmd->t_task_cdb[0];
 	unsigned char *buf;
@@ -4092,8 +4095,9 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd)
 }
 
 sense_reason_t
-target_scsi3_emulate_pr_in(struct se_cmd *cmd)
+target_scsi3_emulate_pr_in(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	sense_reason_t ret;
 
 	/*
diff --git a/drivers/target/target_core_pr.h b/drivers/target/target_core_pr.h
index e3d26e9..181bf97 100644
--- a/drivers/target/target_core_pr.h
+++ b/drivers/target/target_core_pr.h
@@ -52,8 +52,8 @@ extern struct kmem_cache *t10_pr_reg_cache;
 
 extern void core_pr_dump_initiator_port(struct t10_pr_registration *,
 			char *, u32);
-extern sense_reason_t target_scsi2_reservation_release(struct se_cmd *);
-extern sense_reason_t target_scsi2_reservation_reserve(struct se_cmd *);
+extern sense_reason_t target_scsi2_reservation_release(struct target_iostate *);
+extern sense_reason_t target_scsi2_reservation_reserve(struct target_iostate *);
 extern int core_scsi3_alloc_aptpl_registration(
 			struct t10_reservation *, u64,
 			unsigned char *, unsigned char *, u64,
@@ -66,8 +66,8 @@ extern void core_scsi3_free_pr_reg_from_nacl(struct se_device *,
 extern void core_scsi3_free_all_registrations(struct se_device *);
 extern unsigned char *core_scsi3_pr_dump_type(int);
 
-extern sense_reason_t target_scsi3_emulate_pr_in(struct se_cmd *);
-extern sense_reason_t target_scsi3_emulate_pr_out(struct se_cmd *);
+extern sense_reason_t target_scsi3_emulate_pr_in(struct target_iostate *ios);
+extern sense_reason_t target_scsi3_emulate_pr_out(struct target_iostate *ios);
 extern sense_reason_t target_check_reservation(struct se_cmd *);
 
 #endif /* TARGET_CORE_PR_H */
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index b5728bc..c52f943 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -54,7 +54,7 @@ static inline struct pscsi_dev_virt *PSCSI_DEV(struct se_device *dev)
 	return container_of(dev, struct pscsi_dev_virt, dev);
 }
 
-static sense_reason_t pscsi_execute_cmd(struct se_cmd *cmd);
+static sense_reason_t pscsi_execute_cmd(struct target_iostate *ios);
 static void pscsi_req_done(struct request *, int);
 
 /*	pscsi_attach_hba():
@@ -988,8 +988,9 @@ pscsi_parse_cdb(struct se_cmd *cmd)
 }
 
 static sense_reason_t
-pscsi_execute_cmd(struct se_cmd *cmd)
+pscsi_execute_cmd(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct scatterlist *sgl = cmd->t_iomem.t_data_sg;
 	u32 sgl_nents = cmd->t_iomem.t_data_nents;
 	enum dma_data_direction data_direction = cmd->t_iostate.data_direction;
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 744ef71..2095f78 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -38,11 +38,12 @@
 
 static sense_reason_t
 sbc_check_prot(struct se_device *, struct se_cmd *, unsigned char *, u32, bool);
-static sense_reason_t sbc_execute_unmap(struct se_cmd *cmd);
+static sense_reason_t sbc_execute_unmap(struct target_iostate *ios);
 
 static sense_reason_t
-sbc_emulate_readcapacity(struct se_cmd *cmd)
+sbc_emulate_readcapacity(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	unsigned char *cdb = cmd->t_task_cdb;
 	unsigned long long blocks_long = dev->transport->get_blocks(dev);
@@ -90,8 +91,9 @@ sbc_emulate_readcapacity(struct se_cmd *cmd)
 }
 
 static sense_reason_t
-sbc_emulate_readcapacity_16(struct se_cmd *cmd)
+sbc_emulate_readcapacity_16(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct se_session *sess = cmd->se_sess;
 	int pi_prot_type = dev->dev_attrib.pi_prot_type;
@@ -163,8 +165,9 @@ sbc_emulate_readcapacity_16(struct se_cmd *cmd)
 }
 
 static sense_reason_t
-sbc_emulate_startstop(struct se_cmd *cmd)
+sbc_emulate_startstop(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	unsigned char *cdb = cmd->t_task_cdb;
 
 	/*
@@ -218,8 +221,9 @@ sector_t sbc_get_write_same_sectors(struct se_cmd *cmd)
 EXPORT_SYMBOL(sbc_get_write_same_sectors);
 
 static sense_reason_t
-sbc_execute_write_same_unmap(struct se_cmd *cmd)
+sbc_execute_write_same_unmap(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 	sector_t nolb = sbc_get_write_same_sectors(cmd);
 	sense_reason_t ret;
@@ -235,8 +239,10 @@ sbc_execute_write_same_unmap(struct se_cmd *cmd)
 }
 
 static sense_reason_t
-sbc_emulate_noop(struct se_cmd *cmd)
+sbc_emulate_noop(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+
 	target_complete_cmd(cmd, GOOD);
 	return 0;
 }
@@ -318,6 +324,14 @@ static inline unsigned long long transport_lba_64_ext(unsigned char *cdb)
 	return ((unsigned long long)__v2) | (unsigned long long)__v1 << 32;
 }
 
+static sense_reason_t sbc_execute_write_same(struct target_iostate *ios)
+{
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+	struct sbc_ops *ops = cmd->protocol_data;
+
+	return ops->execute_write_same(cmd);
+}
+
 static sense_reason_t
 sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *ops)
 {
@@ -375,7 +389,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 	if (ret)
 		return ret;
 
-	cmd->execute_cmd = ops->execute_write_same;
+	cmd->execute_cmd = &sbc_execute_write_same;
 	return 0;
 }
 
@@ -439,14 +453,23 @@ out:
 }
 
 static sense_reason_t
-sbc_execute_rw(struct se_cmd *cmd)
+sbc_execute_rw(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 
 	return ops->execute_rw(cmd, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
 			       cmd->t_iostate.data_direction);
 }
 
+static sense_reason_t sbc_execute_sync_cache(struct target_iostate *ios)
+{
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+	struct sbc_ops *ops = cmd->protocol_data;
+
+	return ops->execute_sync_cache(cmd);
+}
+
 static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success,
 					     int *post_ret)
 {
@@ -626,8 +649,9 @@ out:
 }
 
 static sense_reason_t
-sbc_compare_and_write(struct se_cmd *cmd)
+sbc_compare_and_write(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 	struct se_device *dev = cmd->se_dev;
 	sense_reason_t ret;
@@ -1054,7 +1078,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 			cmd->t_iostate.t_task_lba = transport_lba_64(cdb);
 		}
 		if (ops->execute_sync_cache) {
-			cmd->execute_cmd = ops->execute_sync_cache;
+			cmd->execute_cmd = sbc_execute_sync_cache;
 			goto check_lba;
 		}
 		size = 0;
@@ -1163,8 +1187,9 @@ u32 sbc_get_device_type(struct se_device *dev)
 EXPORT_SYMBOL(sbc_get_device_type);
 
 static sense_reason_t
-sbc_execute_unmap(struct se_cmd *cmd)
+sbc_execute_unmap(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 	struct se_device *dev = cmd->se_dev;
 	unsigned char *buf, *ptr = NULL;
diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
index 2364de7..c672d9a 100644
--- a/drivers/target/target_core_spc.c
+++ b/drivers/target/target_core_spc.c
@@ -702,8 +702,9 @@ spc_emulate_evpd_00(struct se_cmd *cmd, unsigned char *buf)
 }
 
 static sense_reason_t
-spc_emulate_inquiry(struct se_cmd *cmd)
+spc_emulate_inquiry(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	struct se_portal_group *tpg = cmd->se_lun->lun_tpg;
 	unsigned char *rbuf;
@@ -982,8 +983,9 @@ static int spc_modesense_long_blockdesc(unsigned char *buf, u64 blocks, u32 bloc
 	return 17;
 }
 
-static sense_reason_t spc_emulate_modesense(struct se_cmd *cmd)
+static sense_reason_t spc_emulate_modesense(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = cmd->se_dev;
 	char *cdb = cmd->t_task_cdb;
 	unsigned char buf[SE_MODE_PAGE_BUF], *rbuf;
@@ -1107,8 +1109,9 @@ set_length:
 	return 0;
 }
 
-static sense_reason_t spc_emulate_modeselect(struct se_cmd *cmd)
+static sense_reason_t spc_emulate_modeselect(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	char *cdb = cmd->t_task_cdb;
 	bool ten = cdb[0] == MODE_SELECT_10;
 	int off = ten ? 8 : 4;
@@ -1168,8 +1171,9 @@ out:
 	return ret;
 }
 
-static sense_reason_t spc_emulate_request_sense(struct se_cmd *cmd)
+static sense_reason_t spc_emulate_request_sense(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	unsigned char *cdb = cmd->t_task_cdb;
 	unsigned char *rbuf;
 	u8 ua_asc = 0, ua_ascq = 0;
@@ -1201,8 +1205,9 @@ static sense_reason_t spc_emulate_request_sense(struct se_cmd *cmd)
 	return 0;
 }
 
-sense_reason_t spc_emulate_report_luns(struct se_cmd *cmd)
+sense_reason_t spc_emulate_report_luns(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_dev_entry *deve;
 	struct se_session *sess = cmd->se_sess;
 	struct se_node_acl *nacl;
@@ -1270,8 +1275,10 @@ done:
 EXPORT_SYMBOL(spc_emulate_report_luns);
 
 static sense_reason_t
-spc_emulate_testunitready(struct se_cmd *cmd)
+spc_emulate_testunitready(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+
 	target_complete_cmd(cmd, GOOD);
 	return 0;
 }
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 4156059..b6a3543 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1804,7 +1804,7 @@ void __target_execute_cmd(struct se_cmd *cmd, bool do_checks)
 	cmd->t_iostate.iomem = &cmd->t_iomem;
 	cmd->t_iostate.t_comp_func = &target_complete_ios;
 
-	ret = cmd->execute_cmd(cmd);
+	ret = cmd->execute_cmd(&cmd->t_iostate);
 	if (!ret)
 		return;
 err:
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 505b312..3467560 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1130,8 +1130,9 @@ static sector_t tcmu_get_blocks(struct se_device *dev)
 }
 
 static sense_reason_t
-tcmu_pass_op(struct se_cmd *se_cmd)
+tcmu_pass_op(struct target_iostate *ios)
 {
+	struct se_cmd *se_cmd = container_of(ios, struct se_cmd, t_iostate);
 	int ret = tcmu_queue_cmd(se_cmd);
 
 	if (ret != 0)
diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c
index b6aeb15..04d064f 100644
--- a/drivers/target/target_core_xcopy.c
+++ b/drivers/target/target_core_xcopy.c
@@ -822,8 +822,9 @@ out:
 	target_complete_cmd(ec_cmd, SAM_STAT_CHECK_CONDITION);
 }
 
-sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
+sense_reason_t target_do_xcopy(struct target_iostate *ios)
 {
+	struct se_cmd *se_cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *dev = se_cmd->se_dev;
 	struct xcopy_op *xop = NULL;
 	unsigned char *p = NULL, *seg_desc;
@@ -1006,8 +1007,9 @@ static sense_reason_t target_rcr_operating_parameters(struct se_cmd *se_cmd)
 	return TCM_NO_SENSE;
 }
 
-sense_reason_t target_do_receive_copy_results(struct se_cmd *se_cmd)
+sense_reason_t target_do_receive_copy_results(struct target_iostate *ios)
 {
+	struct se_cmd *se_cmd = container_of(ios, struct se_cmd, t_iostate);
 	unsigned char *cdb = &se_cmd->t_task_cdb[0];
 	int sa = (cdb[1] & 0x1f), list_id = cdb[2];
 	sense_reason_t rc = TCM_NO_SENSE;
diff --git a/drivers/target/target_core_xcopy.h b/drivers/target/target_core_xcopy.h
index 700a981..e4e057e 100644
--- a/drivers/target/target_core_xcopy.h
+++ b/drivers/target/target_core_xcopy.h
@@ -58,5 +58,5 @@ struct xcopy_op {
 
 extern int target_xcopy_setup_pt(void);
 extern void target_xcopy_release_pt(void);
-extern sense_reason_t target_do_xcopy(struct se_cmd *);
-extern sense_reason_t target_do_receive_copy_results(struct se_cmd *);
+extern sense_reason_t target_do_xcopy(struct target_iostate *);
+extern sense_reason_t target_do_receive_copy_results(struct target_iostate *);
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 2f6deb0..4a57477 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -61,7 +61,7 @@ void	target_complete_cmd_with_length(struct se_cmd *, u8, int);
 void	target_complete_ios(struct target_iostate *, u16);
 
 sense_reason_t	spc_parse_cdb(struct se_cmd *cmd, unsigned int *size);
-sense_reason_t	spc_emulate_report_luns(struct se_cmd *cmd);
+sense_reason_t	spc_emulate_report_luns(struct target_iostate *ios);
 sense_reason_t	spc_emulate_inquiry_std(struct se_cmd *, unsigned char *);
 sense_reason_t	spc_emulate_evpd_83(struct se_cmd *, unsigned char *);
 
@@ -91,7 +91,7 @@ sense_reason_t	transport_generic_map_mem_to_cmd(struct se_cmd *,
 
 bool	target_lun_is_rdonly(struct se_cmd *);
 sense_reason_t passthrough_parse_cdb(struct se_cmd *cmd,
-	sense_reason_t (*exec_cmd)(struct se_cmd *cmd));
+	sense_reason_t (*exec_cmd)(struct target_iostate *ios));
 
 bool target_sense_desc_format(struct se_device *dev);
 sector_t target_to_linux_sector(struct se_device *dev, sector_t lb);
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index bd4346b..9bd7559 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -508,7 +508,7 @@ struct se_cmd {
 	struct list_head	se_cmd_list;
 	struct completion	cmd_wait_comp;
 	const struct target_core_fabric_ops *se_tfo;
-	sense_reason_t		(*execute_cmd)(struct se_cmd *);
+	sense_reason_t		(*execute_cmd)(struct target_iostate *);
 	sense_reason_t (*transport_complete_callback)(struct se_cmd *, bool, int *);
 	void			*protocol_data;
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 07/14] target/sbc: Convert sbc_ops->execute_rw to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (5 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 06/14] target: Convert se_cmd->execute_cmd to target_iostate Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 08/14] target/sbc: Convert sbc_dif_copy_prot " Nicholas A. Bellinger
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

Convert IBLOCK, FILEIO and RD to use target_iostate.

This includes converting sbc_ops->execute_rw() to accept a
function pointer callback:

    void (*t_comp_func)(struct target_iostate *, u16)

as well as a 'bool fua_write' flag for signaling Forced Unit
Access (FUA) writes to the IBLOCK and FILEIO backends.
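To illustrate the shape of the conversion, here is a minimal userspace
sketch (not kernel code): the struct layouts, field names beyond those
shown in the patch, and the backend_execute_rw() helper are invented
for illustration only. It shows the two moving parts of this change: the
backend hook now takes a target_iostate plus a completion callback
instead of calling target_complete_cmd() on an se_cmd, and a
SCSI-aware caller can still recover the enclosing se_cmd via
container_of() when it needs to.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-in for the kernel macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define SCF_FUA		0x1
#define SAM_STAT_GOOD	0x0

/* Invented minimal layouts; the real structures carry much more state. */
struct target_iostate {
	unsigned long long t_task_lba;
	unsigned int data_length;
	int completed_status;	/* test hook, not in the patch */
};

struct se_cmd {
	int se_cmd_flags;
	struct target_iostate t_iostate;	/* embedded, as in the patch */
};

/* Completion now flows through a callback on the iostate, not
 * target_complete_cmd() on the se_cmd. */
static void target_complete_ios(struct target_iostate *ios, unsigned short status)
{
	ios->completed_status = status;
}

/* Backend rw hook in the new shape: no se_cmd parameter, FUA passed as
 * a flag, completion delivered via t_comp_func. */
static int backend_execute_rw(struct target_iostate *ios, int fua_write,
			      void (*t_comp_func)(struct target_iostate *, unsigned short))
{
	/* Only callers that know the iostate is embedded in an se_cmd
	 * may do this; a future non-SCSI consumer would not. */
	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);

	(void)cmd;
	(void)fua_write;	/* a real backend would issue WRITE_FUA here */
	t_comp_func(ios, SAM_STAT_GOOD);
	return 0;
}
```

The point of passing fua_write and t_comp_func explicitly is that the
backend no longer dereferences se_cmd_flags or calls an se_cmd-specific
completion, which is what lets it serve non-SCSI consumers.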

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c    | 72 +++++++++++++++++++-----------------
 drivers/target/target_core_iblock.c  | 65 +++++++++++++++++---------------
 drivers/target/target_core_rd.c      | 22 ++++++-----
 drivers/target/target_core_sbc.c     | 11 ++++--
 include/target/target_core_backend.h |  5 ++-
 5 files changed, 96 insertions(+), 79 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index c865e45..bc82018 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -246,7 +246,7 @@ static void fd_free_device(struct se_device *dev)
 	call_rcu(&dev->rcu_head, fd_dev_call_rcu);
 }
 
-static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
+static int fd_do_rw(struct target_iostate *ios, struct file *fd,
 		    u32 block_size, struct scatterlist *sgl,
 		    u32 sgl_nents, u32 data_length, int is_write)
 {
@@ -254,7 +254,7 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
 	struct iov_iter iter;
 	struct bio_vec *bvec;
 	ssize_t len = 0;
-	loff_t pos = (cmd->t_iostate.t_task_lba * block_size);
+	loff_t pos = (ios->t_task_lba * block_size);
 	int ret = 0, i;
 
 	bvec = kcalloc(sgl_nents, sizeof(struct bio_vec), GFP_KERNEL);
@@ -508,23 +508,27 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 }
 
 static sense_reason_t
-fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
-	      enum dma_data_direction data_direction)
+fd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents,
+	      enum dma_data_direction data_direction, bool fua_write,
+	      void (*t_comp_func)(struct target_iostate *, u16))
 {
-	struct se_device *dev = cmd->se_dev;
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+	struct target_iomem *iomem = ios->iomem;
+	struct se_device *dev = ios->se_dev;
 	struct fd_dev *fd_dev = FD_DEV(dev);
 	struct file *file = fd_dev->fd_file;
 	struct file *pfile = fd_dev->fd_prot_file;
 	sense_reason_t rc;
 	int ret = 0;
+
 	/*
 	 * We are currently limited by the number of iovecs (2048) per
 	 * single vfs_[writev,readv] call.
 	 */
-	if (cmd->t_iostate.data_length > FD_MAX_BYTES) {
+	if (ios->data_length > FD_MAX_BYTES) {
 		pr_err("FILEIO: Not able to process I/O of %u bytes due to"
 		       "FD_MAX_BYTES: %u iovec count limitiation\n",
-			cmd->t_iostate.data_length, FD_MAX_BYTES);
+			ios->data_length, FD_MAX_BYTES);
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 	/*
@@ -532,63 +536,63 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	 * physical memory addresses to struct iovec virtual memory.
 	 */
 	if (data_direction == DMA_FROM_DEVICE) {
-		if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
-			ret = fd_do_rw(cmd, pfile, dev->prot_length,
-				       cmd->t_iomem.t_prot_sg,
-				       cmd->t_iomem.t_prot_nents,
-				       cmd->t_iostate.prot_length, 0);
+		if (ios->prot_type && dev->dev_attrib.pi_prot_type) {
+			ret = fd_do_rw(ios, pfile, dev->prot_length,
+				       iomem->t_prot_sg,
+				       iomem->t_prot_nents,
+				       ios->prot_length, 0);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
 
-		ret = fd_do_rw(cmd, file, dev->dev_attrib.block_size,
-			       sgl, sgl_nents, cmd->t_iostate.data_length, 0);
+		ret = fd_do_rw(ios, file, dev->dev_attrib.block_size,
+			       sgl, sgl_nents, ios->data_length, 0);
 
-		if (ret > 0 && cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
-			u32 sectors = cmd->t_iostate.data_length >>
+		if (ret > 0 && ios->prot_type && dev->dev_attrib.pi_prot_type) {
+			u32 sectors = ios->data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors,
-					    0, cmd->t_iomem.t_prot_sg, 0);
+			rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors,
+					    0, iomem->t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
 	} else {
-		if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
-			u32 sectors = cmd->t_iostate.data_length >>
+		if (ios->prot_type && dev->dev_attrib.pi_prot_type) {
+			u32 sectors = ios->data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors,
-					    0, cmd->t_iomem.t_prot_sg, 0);
+			rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors,
+					    0, iomem->t_prot_sg, 0);
 			if (rc)
 				return rc;
 		}
 
-		ret = fd_do_rw(cmd, file, dev->dev_attrib.block_size,
-			       sgl, sgl_nents, cmd->t_iostate.data_length, 1);
+		ret = fd_do_rw(ios, file, dev->dev_attrib.block_size,
+			       sgl, sgl_nents, ios->data_length, 1);
 		/*
 		 * Perform implicit vfs_fsync_range() for fd_do_writev() ops
 		 * for SCSI WRITEs with Forced Unit Access (FUA) set.
 		 * Allow this to happen independent of WCE=0 setting.
 		 */
-		if (ret > 0 && (cmd->se_cmd_flags & SCF_FUA)) {
-			loff_t start = cmd->t_iostate.t_task_lba *
+		if (ret > 0 && fua_write) {
+			loff_t start = ios->t_task_lba *
 				dev->dev_attrib.block_size;
 			loff_t end;
 
-			if (cmd->t_iostate.data_length)
-				end = start + cmd->t_iostate.data_length - 1;
+			if (ios->data_length)
+				end = start + ios->data_length - 1;
 			else
 				end = LLONG_MAX;
 
 			vfs_fsync_range(fd_dev->fd_file, start, end, 1);
 		}
 
-		if (ret > 0 && cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
-			ret = fd_do_rw(cmd, pfile, dev->prot_length,
-				       cmd->t_iomem.t_prot_sg,
-				       cmd->t_iomem.t_prot_nents,
-				       cmd->t_iostate.prot_length, 1);
+		if (ret > 0 && ios->prot_type && dev->dev_attrib.pi_prot_type) {
+			ret = fd_do_rw(ios, pfile, dev->prot_length,
+				       iomem->t_prot_sg,
+				       iomem->t_prot_nents,
+				       ios->prot_length, 1);
 			if (ret < 0)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
@@ -598,7 +602,7 @@ fd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
 	if (ret)
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		ios->t_comp_func(ios, SAM_STAT_GOOD);
 	return 0;
 }
 
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 0f45973..8e90ec42 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -275,9 +275,9 @@ static unsigned long long iblock_emulate_read_cap_with_block_size(
 	return blocks_long;
 }
 
-static void iblock_complete_cmd(struct se_cmd *cmd)
+static void iblock_complete_cmd(struct target_iostate *ios)
 {
-	struct iblock_req *ibr = cmd->priv;
+	struct iblock_req *ibr = ios->priv;
 	u8 status;
 
 	if (!atomic_dec_and_test(&ibr->pending))
@@ -288,14 +288,16 @@ static void iblock_complete_cmd(struct se_cmd *cmd)
 	else
 		status = SAM_STAT_GOOD;
 
-	target_complete_cmd(cmd, status);
+	// XXX: ios status SAM completion translation
+	ios->t_comp_func(ios, status);
+
 	kfree(ibr);
 }
 
 static void iblock_bio_done(struct bio *bio)
 {
-	struct se_cmd *cmd = bio->bi_private;
-	struct iblock_req *ibr = cmd->priv;
+	struct target_iostate *ios = bio->bi_private;
+	struct iblock_req *ibr = ios->priv;
 
 	if (bio->bi_error) {
 		pr_err("bio error: %p,  err: %d\n", bio, bio->bi_error);
@@ -308,13 +310,15 @@ static void iblock_bio_done(struct bio *bio)
 
 	bio_put(bio);
 
-	iblock_complete_cmd(cmd);
+	iblock_complete_cmd(ios);
 }
 
+
+
 static struct bio *
-iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num)
+iblock_get_bio(struct target_iostate *ios, sector_t lba, u32 sg_num)
 {
-	struct iblock_dev *ib_dev = IBLOCK_DEV(cmd->se_dev);
+	struct iblock_dev *ib_dev = IBLOCK_DEV(ios->se_dev);
 	struct bio *bio;
 
 	/*
@@ -331,7 +335,7 @@ iblock_get_bio(struct se_cmd *cmd, sector_t lba, u32 sg_num)
 	}
 
 	bio->bi_bdev = ib_dev->ibd_bd;
-	bio->bi_private = cmd;
+	bio->bi_private = ios;
 	bio->bi_end_io = &iblock_bio_done;
 	bio->bi_iter.bi_sector = lba;
 
@@ -447,6 +451,7 @@ iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
 static sense_reason_t
 iblock_execute_write_same(struct se_cmd *cmd)
 {
+	struct target_iostate *ios = &cmd->t_iostate;
 	struct block_device *bdev = IBLOCK_DEV(cmd->se_dev)->ibd_bd;
 	struct iblock_req *ibr;
 	struct scatterlist *sg;
@@ -478,9 +483,9 @@ iblock_execute_write_same(struct se_cmd *cmd)
 	ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
 	if (!ibr)
 		goto fail;
-	cmd->priv = ibr;
+	ios->priv = ibr;
 
-	bio = iblock_get_bio(cmd, block_lba, 1);
+	bio = iblock_get_bio(ios, block_lba, 1);
 	if (!bio)
 		goto fail_free_ibr;
 
@@ -493,7 +498,7 @@ iblock_execute_write_same(struct se_cmd *cmd)
 		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
 				!= sg->length) {
 
-			bio = iblock_get_bio(cmd, block_lba, 1);
+			bio = iblock_get_bio(ios, block_lba, 1);
 			if (!bio)
 				goto fail_put_bios;
 
@@ -623,9 +628,10 @@ static ssize_t iblock_show_configfs_dev_params(struct se_device *dev, char *b)
 }
 
 static int
-iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
+iblock_alloc_bip(struct target_iostate *ios, struct target_iomem *iomem,
+		 struct bio *bio)
 {
-	struct se_device *dev = cmd->se_dev;
+	struct se_device *dev = ios->se_dev;
 	struct blk_integrity *bi;
 	struct bio_integrity_payload *bip;
 	struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
@@ -638,20 +644,20 @@ iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
 		return -ENODEV;
 	}
 
-	bip = bio_integrity_alloc(bio, GFP_NOIO, cmd->t_iomem.t_prot_nents);
+	bip = bio_integrity_alloc(bio, GFP_NOIO, iomem->t_prot_nents);
 	if (IS_ERR(bip)) {
 		pr_err("Unable to allocate bio_integrity_payload\n");
 		return PTR_ERR(bip);
 	}
 
-	bip->bip_iter.bi_size = (cmd->t_iostate.data_length / dev->dev_attrib.block_size) *
+	bip->bip_iter.bi_size = (ios->data_length / dev->dev_attrib.block_size) *
 			 dev->prot_length;
 	bip->bip_iter.bi_sector = bio->bi_iter.bi_sector;
 
 	pr_debug("IBLOCK BIP Size: %u Sector: %llu\n", bip->bip_iter.bi_size,
 		 (unsigned long long)bip->bip_iter.bi_sector);
 
-	for_each_sg(cmd->t_iomem.t_prot_sg, sg, cmd->t_iomem.t_prot_nents, i) {
+	for_each_sg(iomem->t_prot_sg, sg, iomem->t_prot_nents, i) {
 
 		rc = bio_integrity_add_page(bio, sg_page(sg), sg->length,
 					    sg->offset);
@@ -668,11 +674,12 @@ iblock_alloc_bip(struct se_cmd *cmd, struct bio *bio)
 }
 
 static sense_reason_t
-iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
-		  enum dma_data_direction data_direction)
+iblock_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents,
+		  enum dma_data_direction data_direction, bool fua_write,
+		  void (*t_comp_func)(struct target_iostate *ios, u16))
 {
-	struct se_device *dev = cmd->se_dev;
-	sector_t block_lba = target_to_linux_sector(dev, cmd->t_iostate.t_task_lba);
+	struct se_device *dev = ios->se_dev;
+	sector_t block_lba = target_to_linux_sector(dev, ios->t_task_lba);
 	struct iblock_req *ibr;
 	struct bio *bio, *bio_start;
 	struct bio_list list;
@@ -690,7 +697,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		 * is not enabled, or if initiator set the Force Unit Access bit.
 		 */
 		if (test_bit(QUEUE_FLAG_FUA, &q->queue_flags)) {
-			if (cmd->se_cmd_flags & SCF_FUA)
+			if (fua_write)
 				rw = WRITE_FUA;
 			else if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags))
 				rw = WRITE_FUA;
@@ -706,15 +713,15 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
 	if (!ibr)
 		goto fail;
-	cmd->priv = ibr;
+	ios->priv = ibr;
 
 	if (!sgl_nents) {
 		atomic_set(&ibr->pending, 1);
-		iblock_complete_cmd(cmd);
+		iblock_complete_cmd(ios);
 		return 0;
 	}
 
-	bio = iblock_get_bio(cmd, block_lba, sgl_nents);
+	bio = iblock_get_bio(ios, block_lba, sgl_nents);
 	if (!bio)
 		goto fail_free_ibr;
 
@@ -738,7 +745,7 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 				bio_cnt = 0;
 			}
 
-			bio = iblock_get_bio(cmd, block_lba, sg_num);
+			bio = iblock_get_bio(ios, block_lba, sg_num);
 			if (!bio)
 				goto fail_put_bios;
 
@@ -752,14 +759,14 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 		sg_num--;
 	}
 
-	if (cmd->t_iostate.prot_type && dev->dev_attrib.pi_prot_type) {
-		int rc = iblock_alloc_bip(cmd, bio_start);
+	if (ios->prot_type && dev->dev_attrib.pi_prot_type) {
+		int rc = iblock_alloc_bip(ios, ios->iomem, bio_start);
 		if (rc)
 			goto fail_put_bios;
 	}
 
 	iblock_submit_bios(&list, rw);
-	iblock_complete_cmd(cmd);
+	iblock_complete_cmd(ios);
 	return 0;
 
 fail_put_bios:
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 4edd3e0..df8bd58 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -435,10 +435,12 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 }
 
 static sense_reason_t
-rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
-	      enum dma_data_direction data_direction)
+rd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents,
+	      enum dma_data_direction data_direction, bool fua_write,
+	      void (*t_comp_func)(struct target_iostate *, u16))
 {
-	struct se_device *se_dev = cmd->se_dev;
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+	struct se_device *se_dev = ios->se_dev;
 	struct rd_dev *dev = RD_DEV(se_dev);
 	struct rd_dev_sg_table *table;
 	struct scatterlist *rd_sg;
@@ -451,14 +453,14 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	sense_reason_t rc;
 
 	if (dev->rd_flags & RDF_NULLIO) {
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		(*t_comp_func)(ios, SAM_STAT_GOOD);
 		return 0;
 	}
 
-	tmp = cmd->t_iostate.t_task_lba * se_dev->dev_attrib.block_size;
+	tmp = ios->t_task_lba * se_dev->dev_attrib.block_size;
 	rd_offset = do_div(tmp, PAGE_SIZE);
 	rd_page = tmp;
-	rd_size = cmd->t_iostate.data_length;
+	rd_size = ios->data_length;
 
 	table = rd_get_sg_table(dev, rd_page);
 	if (!table)
@@ -469,9 +471,9 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	pr_debug("RD[%u]: %s LBA: %llu, Size: %u Page: %u, Offset: %u\n",
 			dev->rd_dev_id,
 			data_direction == DMA_FROM_DEVICE ? "Read" : "Write",
-			cmd->t_iostate.t_task_lba, rd_size, rd_page, rd_offset);
+			ios->t_task_lba, rd_size, rd_page, rd_offset);
 
-	if (cmd->t_iostate.prot_type && se_dev->dev_attrib.pi_prot_type &&
+	if (ios->prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_TO_DEVICE) {
 		rc = rd_do_prot_rw(cmd, false);
 		if (rc)
@@ -539,14 +541,14 @@ rd_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
 	}
 	sg_miter_stop(&m);
 
-	if (cmd->t_iostate.prot_type && se_dev->dev_attrib.pi_prot_type &&
+	if (ios->prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_FROM_DEVICE) {
 		rc = rd_do_prot_rw(cmd, true);
 		if (rc)
 			return rc;
 	}
 
-	target_complete_cmd(cmd, SAM_STAT_GOOD);
+	(*t_comp_func)(ios, SAM_STAT_GOOD);
 	return 0;
 }
 
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 2095f78..e1288de 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -457,9 +457,10 @@ sbc_execute_rw(struct target_iostate *ios)
 {
 	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
+	bool fua_write = (cmd->se_cmd_flags & SCF_FUA);
 
-	return ops->execute_rw(cmd, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
-			       cmd->t_iostate.data_direction);
+	return ops->execute_rw(ios, cmd->t_iomem.t_data_sg, cmd->t_iomem.t_data_nents,
+			       cmd->t_iostate.data_direction, fua_write, &target_complete_ios);
 }
 
 static sense_reason_t sbc_execute_sync_cache(struct target_iostate *ios)
@@ -654,6 +655,7 @@ sbc_compare_and_write(struct target_iostate *ios)
 	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 	struct se_device *dev = cmd->se_dev;
+	bool fua_write = (cmd->se_cmd_flags & SCF_FUA);
 	sense_reason_t ret;
 	int rc;
 	/*
@@ -673,8 +675,9 @@ sbc_compare_and_write(struct target_iostate *ios)
 	cmd->t_iostate.data_length = cmd->t_iostate.t_task_nolb *
 				     dev->dev_attrib.block_size;
 
-	ret = ops->execute_rw(cmd, cmd->t_iomem.t_bidi_data_sg,
-			      cmd->t_iomem.t_bidi_data_nents, DMA_FROM_DEVICE);
+	ret = ops->execute_rw(ios, cmd->t_iomem.t_bidi_data_sg,
+			      cmd->t_iomem.t_bidi_data_nents, DMA_FROM_DEVICE,
+			      fua_write, &target_complete_ios);
 	if (ret) {
 		cmd->transport_complete_callback = NULL;
 		up(&dev->caw_sem);
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 4a57477..ade90b7 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -45,8 +45,9 @@ struct target_backend_ops {
 };
 
 struct sbc_ops {
-	sense_reason_t (*execute_rw)(struct se_cmd *cmd, struct scatterlist *,
-				     u32, enum dma_data_direction);
+	sense_reason_t (*execute_rw)(struct target_iostate *ios, struct scatterlist *,
+				     u32, enum dma_data_direction, bool fua_write,
+				     void (*t_comp_func)(struct target_iostate *ios, u16));
 	sense_reason_t (*execute_sync_cache)(struct se_cmd *cmd);
 	sense_reason_t (*execute_write_same)(struct se_cmd *cmd);
 	sense_reason_t (*execute_unmap)(struct se_cmd *cmd,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 08/14] target/sbc: Convert sbc_dif_copy_prot to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (6 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 07/14] target/sbc: Convert sbc_ops->execute_rw " Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 09/14] target/file: Convert sbc_dif_verify " Nicholas A. Bellinger
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts sbc_dif_copy_prot() to use struct target_iomem
for its T10-PI protection scatterlist memory and scatterlist count
dereferences.

Also convert its single external user, rd_do_prot_rw().
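The call site in rd_do_prot_rw() below shows the consequence of losing
the se_cmd argument: the protection-interval size can no longer be
looked up via cmd->se_dev, so the caller passes se_dev->prot_length
explicitly. A minimal userspace sketch of that interface, with plain
byte buffers standing in for the scatterlists and an invented
t_prot_buf field (the real function walks sg lists):

```c
#include <assert.h>
#include <string.h>

/* Invented minimal layout; the real struct holds scatterlists. */
struct target_iomem {
	unsigned char *t_prot_buf;	/* stands in for t_prot_sg */
};

/* New shape: prot_length is an explicit parameter because there is no
 * se_cmd/se_dev left to dereference it from. */
static void sbc_dif_copy_prot_sketch(struct target_iomem *iomem,
				     unsigned int sectors, int is_read,
				     unsigned char *dev_prot,
				     unsigned int prot_length)
{
	size_t len = (size_t)sectors * prot_length;

	if (is_read)	/* READ: device-side PI -> command's prot buffer */
		memcpy(iomem->t_prot_buf, dev_prot, len);
	else		/* WRITE: command's prot buffer -> device-side PI */
		memcpy(dev_prot, iomem->t_prot_buf, len);
}
```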

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_rd.c      | 24 +++++++++++++-----------
 drivers/target/target_core_sbc.c     |  9 ++++-----
 include/target/target_core_backend.h |  4 ++--
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index df8bd58..cbcb33e 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -398,19 +398,21 @@ static struct rd_dev_sg_table *rd_get_prot_table(struct rd_dev *rd_dev, u32 page
 	return NULL;
 }
 
-static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
+static sense_reason_t rd_do_prot_rw(struct target_iostate *ios, bool is_read)
 {
-	struct se_device *se_dev = cmd->se_dev;
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
+	struct se_device *se_dev = ios->se_dev;
+	struct target_iomem *iomem = ios->iomem;
 	struct rd_dev *dev = RD_DEV(se_dev);
 	struct rd_dev_sg_table *prot_table;
 	struct scatterlist *prot_sg;
-	u32 sectors = cmd->t_iostate.data_length / se_dev->dev_attrib.block_size;
+	u32 sectors = ios->data_length / se_dev->dev_attrib.block_size;
 	u32 prot_offset, prot_page;
 	u32 prot_npages __maybe_unused;
 	u64 tmp;
 	sense_reason_t rc = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	tmp = cmd->t_iostate.t_task_lba * se_dev->prot_length;
+	tmp = ios->t_task_lba * se_dev->prot_length;
 	prot_offset = do_div(tmp, PAGE_SIZE);
 	prot_page = tmp;
 
@@ -422,14 +424,15 @@ static sense_reason_t rd_do_prot_rw(struct se_cmd *cmd, bool is_read)
 					prot_table->page_start_offset];
 
 	if (is_read)
-		rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors, 0,
+		rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors, 0,
 				    prot_sg, prot_offset);
 	else
-		rc = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba, sectors, 0,
-				    cmd->t_iomem.t_prot_sg, 0);
+		rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors, 0,
+				    iomem->t_prot_sg, 0);
 
 	if (!rc)
-		sbc_dif_copy_prot(cmd, sectors, is_read, prot_sg, prot_offset);
+		sbc_dif_copy_prot(iomem, sectors, is_read, prot_sg, prot_offset,
+				  se_dev->prot_length);
 
 	return rc;
 }
@@ -439,7 +442,6 @@ rd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 	      enum dma_data_direction data_direction, bool fua_write,
 	      void (*t_comp_func)(struct target_iostate *, u16))
 {
-	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *se_dev = ios->se_dev;
 	struct rd_dev *dev = RD_DEV(se_dev);
 	struct rd_dev_sg_table *table;
@@ -475,7 +477,7 @@ rd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 
 	if (ios->prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_TO_DEVICE) {
-		rc = rd_do_prot_rw(cmd, false);
+		rc = rd_do_prot_rw(ios, false);
 		if (rc)
 			return rc;
 	}
@@ -543,7 +545,7 @@ rd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 
 	if (ios->prot_type && se_dev->dev_attrib.pi_prot_type &&
 	    data_direction == DMA_FROM_DEVICE) {
-		rc = rd_do_prot_rw(cmd, true);
+		rc = rd_do_prot_rw(ios, true);
 		if (rc)
 			return rc;
 	}
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index e1288de..e82b261 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -1381,10 +1381,9 @@ check_ref:
 	return 0;
 }
 
-void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
-		       struct scatterlist *sg, int sg_off)
+void sbc_dif_copy_prot(struct target_iomem *iomem, unsigned int sectors, bool read,
+		       struct scatterlist *sg, int sg_off, u32 prot_length)
 {
-	struct se_device *dev = cmd->se_dev;
 	struct scatterlist *psg;
 	void *paddr, *addr;
 	unsigned int i, len, left;
@@ -1393,9 +1392,9 @@ void sbc_dif_copy_prot(struct se_cmd *cmd, unsigned int sectors, bool read,
 	if (!sg)
 		return;
 
-	left = sectors * dev->prot_length;
+	left = sectors * prot_length;
 
-	for_each_sg(cmd->t_iomem.t_prot_sg, psg, cmd->t_iomem.t_prot_nents, i) {
+	for_each_sg(iomem->t_prot_sg, psg, iomem->t_prot_nents, i) {
 		unsigned int psg_len, copied = 0;
 
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index ade90b7..f2d593f 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -73,8 +73,8 @@ sector_t	sbc_get_write_same_sectors(struct se_cmd *cmd);
 void	sbc_dif_generate(struct se_cmd *);
 sense_reason_t	sbc_dif_verify(struct se_cmd *, sector_t, unsigned int,
 				     unsigned int, struct scatterlist *, int);
-void sbc_dif_copy_prot(struct se_cmd *, unsigned int, bool,
-		       struct scatterlist *, int);
+void sbc_dif_copy_prot(struct target_iomem *, unsigned int, bool,
+		       struct scatterlist *, int, u32);
 void	transport_set_vpd_proto_id(struct t10_vpd *, unsigned char *);
 int	transport_set_vpd_assoc(struct t10_vpd *, unsigned char *);
 int	transport_set_vpd_ident_type(struct t10_vpd *, unsigned char *);
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 09/14] target/file: Convert sbc_dif_verify to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (7 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 08/14] target/sbc: Convert sbc_dif_copy_prot " Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 10/14] target/iblock: Fold iblock_req into target_iostate Nicholas A. Bellinger
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts sbc_dif_verify() and its associated T10-PI users
in the existing FILEIO driver to use target_iostate.

Also add FIXMEs to target_write_prot_action() and
target_read_prot_action() for target_iostate + target_iomem, so this
logic can eventually be used by external drivers doing
TARGET_PROT_DOUT_INSERT + TARGET_PROT_DIN_STRIP via software emulation.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c      |  5 ++---
 drivers/target/target_core_rd.c        |  5 ++---
 drivers/target/target_core_sbc.c       | 21 +++++++++++----------
 drivers/target/target_core_transport.c | 15 +++++++++++----
 include/target/target_core_backend.h   |  2 +-
 5 files changed, 27 insertions(+), 21 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index bc82018..ed94969 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -512,7 +512,6 @@ fd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 	      enum dma_data_direction data_direction, bool fua_write,
 	      void (*t_comp_func)(struct target_iostate *, u16))
 {
-	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct target_iomem *iomem = ios->iomem;
 	struct se_device *dev = ios->se_dev;
 	struct fd_dev *fd_dev = FD_DEV(dev);
@@ -552,7 +551,7 @@ fd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 			u32 sectors = ios->data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors,
+			rc = sbc_dif_verify(ios, ios->t_task_lba, sectors,
 					    0, iomem->t_prot_sg, 0);
 			if (rc)
 				return rc;
@@ -562,7 +561,7 @@ fd_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_nents
 			u32 sectors = ios->data_length >>
 					ilog2(dev->dev_attrib.block_size);
 
-			rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors,
+			rc = sbc_dif_verify(ios, ios->t_task_lba, sectors,
 					    0, iomem->t_prot_sg, 0);
 			if (rc)
 				return rc;
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index cbcb33e..a38a37f 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -400,7 +400,6 @@ static struct rd_dev_sg_table *rd_get_prot_table(struct rd_dev *rd_dev, u32 page
 
 static sense_reason_t rd_do_prot_rw(struct target_iostate *ios, bool is_read)
 {
-	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct se_device *se_dev = ios->se_dev;
 	struct target_iomem *iomem = ios->iomem;
 	struct rd_dev *dev = RD_DEV(se_dev);
@@ -424,10 +423,10 @@ static sense_reason_t rd_do_prot_rw(struct target_iostate *ios, bool is_read)
 					prot_table->page_start_offset];
 
 	if (is_read)
-		rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors, 0,
+		rc = sbc_dif_verify(ios, ios->t_task_lba, sectors, 0,
 				    prot_sg, prot_offset);
 	else
-		rc = sbc_dif_verify(cmd, ios->t_task_lba, sectors, 0,
+		rc = sbc_dif_verify(ios, ios->t_task_lba, sectors, 0,
 				    iomem->t_prot_sg, 0);
 
 	if (!rc)
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index e82b261..649a3f2 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -1341,12 +1341,12 @@ sbc_dif_generate(struct se_cmd *cmd)
 }
 
 static sense_reason_t
-sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
+sbc_dif_v1_verify(struct target_iostate *ios, struct t10_pi_tuple *sdt,
 		  __u16 crc, sector_t sector, unsigned int ei_lba)
 {
 	__be16 csum;
 
-	if (!(cmd->t_iostate.prot_checks & TARGET_DIF_CHECK_GUARD))
+	if (!(ios->prot_checks & TARGET_DIF_CHECK_GUARD))
 		goto check_ref;
 
 	csum = cpu_to_be16(crc);
@@ -1359,10 +1359,10 @@ sbc_dif_v1_verify(struct se_cmd *cmd, struct t10_pi_tuple *sdt,
 	}
 
 check_ref:
-	if (!(cmd->t_iostate.prot_checks & TARGET_DIF_CHECK_REFTAG))
+	if (!(ios->prot_checks & TARGET_DIF_CHECK_REFTAG))
 		return 0;
 
-	if (cmd->t_iostate.prot_type == TARGET_DIF_TYPE1_PROT &&
+	if (ios->prot_type == TARGET_DIF_TYPE1_PROT &&
 	    be32_to_cpu(sdt->ref_tag) != (sector & 0xffffffff)) {
 		pr_err("DIFv1 Type 1 reference failed on sector: %llu tag: 0x%08x"
 		       " sector MSB: 0x%08x\n", (unsigned long long)sector,
@@ -1370,7 +1370,7 @@ check_ref:
 		return TCM_LOGICAL_BLOCK_REF_TAG_CHECK_FAILED;
 	}
 
-	if (cmd->t_iostate.prot_type == TARGET_DIF_TYPE2_PROT &&
+	if (ios->prot_type == TARGET_DIF_TYPE2_PROT &&
 	    be32_to_cpu(sdt->ref_tag) != ei_lba) {
 		pr_err("DIFv1 Type 2 reference failed on sector: %llu tag: 0x%08x"
 		       " ei_lba: 0x%08x\n", (unsigned long long)sector,
@@ -1426,12 +1426,13 @@ void sbc_dif_copy_prot(struct target_iomem *iomem, unsigned int sectors, bool re
 EXPORT_SYMBOL(sbc_dif_copy_prot);
 
 sense_reason_t
-sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
+sbc_dif_verify(struct target_iostate *ios, sector_t start, unsigned int sectors,
 	       unsigned int ei_lba, struct scatterlist *psg, int psg_off)
 {
-	struct se_device *dev = cmd->se_dev;
+	struct target_iomem *iomem = ios->iomem;
+	struct se_device *dev = ios->se_dev;
 	struct t10_pi_tuple *sdt;
-	struct scatterlist *dsg = cmd->t_iomem.t_data_sg;
+	struct scatterlist *dsg = iomem->t_data_sg;
 	sector_t sector = start;
 	void *daddr, *paddr;
 	int i;
@@ -1488,11 +1489,11 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 				dsg_off += block_size;
 			}
 
-			rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
+			rc = sbc_dif_v1_verify(ios, sdt, crc, sector, ei_lba);
 			if (rc) {
 				kunmap_atomic(daddr - dsg->offset);
 				kunmap_atomic(paddr - psg->offset);
-				cmd->t_iostate.bad_sector = sector;
+				ios->bad_sector = sector;
 				return rc;
 			}
 next:
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index b6a3543..d588759 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1815,8 +1815,11 @@ err:
 	transport_generic_request_failure(cmd, ret);
 }
 
+// XXX: Convert target_write_prot_action to target_iostate
 static int target_write_prot_action(struct se_cmd *cmd)
 {
+	struct target_iostate *ios = &cmd->t_iostate;
+	struct target_iomem *iomem = ios->iomem;
 	u32 sectors;
 	/*
 	 * Perform WRITE_INSERT of PI using software emulation when backend
@@ -1833,8 +1836,8 @@ static int target_write_prot_action(struct se_cmd *cmd)
 			break;
 
 		sectors = cmd->t_iostate.data_length >> ilog2(cmd->se_dev->dev_attrib.block_size);
-		cmd->pi_err = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba,
-					     sectors, 0, cmd->t_iomem.t_prot_sg, 0);
+		cmd->pi_err = sbc_dif_verify(ios, ios->t_task_lba,
+					     sectors, 0, iomem->t_prot_sg, 0);
 		if (unlikely(cmd->pi_err)) {
 			spin_lock_irq(&cmd->t_state_lock);
 			cmd->transport_state &= ~(CMD_T_BUSY|CMD_T_SENT);
@@ -2055,17 +2058,21 @@ static void transport_handle_queue_full(
 	schedule_work(&cmd->se_dev->qf_work_queue);
 }
 
+// XXX: Convert target_read_prot_action to target_iostate
 static bool target_read_prot_action(struct se_cmd *cmd)
 {
+	struct target_iostate *ios = &cmd->t_iostate;
+	struct target_iomem *iomem = ios->iomem;
+
 	switch (cmd->t_iostate.prot_op) {
 	case TARGET_PROT_DIN_STRIP:
 		if (!(cmd->se_sess->sup_prot_ops & TARGET_PROT_DIN_STRIP)) {
 			u32 sectors = cmd->t_iostate.data_length >>
 				  ilog2(cmd->se_dev->dev_attrib.block_size);
 
-			cmd->pi_err = sbc_dif_verify(cmd, cmd->t_iostate.t_task_lba,
+			cmd->pi_err = sbc_dif_verify(ios, ios->t_task_lba,
 						     sectors, 0,
-						     cmd->t_iomem.t_prot_sg, 0);
+						     iomem->t_prot_sg, 0);
 			if (cmd->pi_err)
 				return true;
 		}
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index f2d593f..5859ea5 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -71,7 +71,7 @@ u32	sbc_get_device_rev(struct se_device *dev);
 u32	sbc_get_device_type(struct se_device *dev);
 sector_t	sbc_get_write_same_sectors(struct se_cmd *cmd);
 void	sbc_dif_generate(struct se_cmd *);
-sense_reason_t	sbc_dif_verify(struct se_cmd *, sector_t, unsigned int,
+sense_reason_t	sbc_dif_verify(struct target_iostate *, sector_t, unsigned int,
 				     unsigned int, struct scatterlist *, int);
 void sbc_dif_copy_prot(struct target_iomem *, unsigned int, bool,
 		       struct scatterlist *, int, u32);
-- 
1.9.1



* [PATCH 10/14] target/iblock: Fold iblock_req into target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (8 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 09/14] target/file: Convert sbc_dif_verify " Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 11/14] target/sbc: Convert sbc_ops->execute_sync_cache to target_iostate Nicholas A. Bellinger
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch folds the two existing iblock_req members into
target_iostate, and updates the associated sbc_ops
iblock_execute_rw() and iblock_execute_write_same() callbacks.

Also, go ahead and drop target_iostate->priv now
that it's unused.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_iblock.c | 40 ++++++++++---------------------------
 drivers/target/target_core_iblock.h |  5 -----
 include/target/target_core_base.h   |  5 ++++-
 3 files changed, 14 insertions(+), 36 deletions(-)

diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 8e90ec42..daf052d 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -277,34 +277,30 @@ static unsigned long long iblock_emulate_read_cap_with_block_size(
 
 static void iblock_complete_cmd(struct target_iostate *ios)
 {
-	struct iblock_req *ibr = ios->priv;
 	u8 status;
 
-	if (!atomic_dec_and_test(&ibr->pending))
+	if (!atomic_dec_and_test(&ios->backend_pending))
 		return;
 
-	if (atomic_read(&ibr->ib_bio_err_cnt))
+	if (atomic_read(&ios->backend_err_cnt))
 		status = SAM_STAT_CHECK_CONDITION;
 	else
 		status = SAM_STAT_GOOD;
 
 	// XXX: ios status SAM completion translation
 	ios->t_comp_func(ios, status);
-
-	kfree(ibr);
 }
 
 static void iblock_bio_done(struct bio *bio)
 {
 	struct target_iostate *ios = bio->bi_private;
-	struct iblock_req *ibr = ios->priv;
 
 	if (bio->bi_error) {
 		pr_err("bio error: %p,  err: %d\n", bio, bio->bi_error);
 		/*
 		 * Bump the ib_bio_err_cnt and release bio.
 		 */
-		atomic_inc(&ibr->ib_bio_err_cnt);
+		atomic_inc(&ios->backend_err_cnt);
 		smp_mb__after_atomic();
 	}
 
@@ -453,7 +449,6 @@ iblock_execute_write_same(struct se_cmd *cmd)
 {
 	struct target_iostate *ios = &cmd->t_iostate;
 	struct block_device *bdev = IBLOCK_DEV(cmd->se_dev)->ibd_bd;
-	struct iblock_req *ibr;
 	struct scatterlist *sg;
 	struct bio *bio;
 	struct bio_list list;
@@ -480,19 +475,14 @@ iblock_execute_write_same(struct se_cmd *cmd)
 	if (bdev_write_same(bdev))
 		return iblock_execute_write_same_direct(bdev, cmd);
 
-	ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
-	if (!ibr)
-		goto fail;
-	ios->priv = ibr;
-
 	bio = iblock_get_bio(ios, block_lba, 1);
 	if (!bio)
-		goto fail_free_ibr;
+		goto fail;
 
 	bio_list_init(&list);
 	bio_list_add(&list, bio);
 
-	atomic_set(&ibr->pending, 1);
+	atomic_set(&ios->backend_pending, 1);
 
 	while (sectors) {
 		while (bio_add_page(bio, sg_page(sg), sg->length, sg->offset)
@@ -502,7 +492,7 @@ iblock_execute_write_same(struct se_cmd *cmd)
 			if (!bio)
 				goto fail_put_bios;
 
-			atomic_inc(&ibr->pending);
+			atomic_inc(&ios->backend_pending);
 			bio_list_add(&list, bio);
 		}
 
@@ -517,8 +507,6 @@ iblock_execute_write_same(struct se_cmd *cmd)
 fail_put_bios:
 	while ((bio = bio_list_pop(&list)))
 		bio_put(bio);
-fail_free_ibr:
-	kfree(ibr);
 fail:
 	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 }
@@ -680,7 +668,6 @@ iblock_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_n
 {
 	struct se_device *dev = ios->se_dev;
 	sector_t block_lba = target_to_linux_sector(dev, ios->t_task_lba);
-	struct iblock_req *ibr;
 	struct bio *bio, *bio_start;
 	struct bio_list list;
 	struct scatterlist *sg;
@@ -710,26 +697,21 @@ iblock_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_n
 		rw = READ;
 	}
 
-	ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
-	if (!ibr)
-		goto fail;
-	ios->priv = ibr;
-
 	if (!sgl_nents) {
-		atomic_set(&ibr->pending, 1);
+		atomic_set(&ios->backend_pending, 1);
 		iblock_complete_cmd(ios);
 		return 0;
 	}
 
 	bio = iblock_get_bio(ios, block_lba, sgl_nents);
 	if (!bio)
-		goto fail_free_ibr;
+		goto fail;
 
 	bio_start = bio;
 	bio_list_init(&list);
 	bio_list_add(&list, bio);
 
-	atomic_set(&ibr->pending, 2);
+	atomic_set(&ios->backend_pending, 2);
 	bio_cnt = 1;
 
 	for_each_sg(sgl, sg, sgl_nents, i) {
@@ -749,7 +731,7 @@ iblock_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_n
 			if (!bio)
 				goto fail_put_bios;
 
-			atomic_inc(&ibr->pending);
+			atomic_inc(&ios->backend_pending);
 			bio_list_add(&list, bio);
 			bio_cnt++;
 		}
@@ -772,8 +754,6 @@ iblock_execute_rw(struct target_iostate *ios, struct scatterlist *sgl, u32 sgl_n
 fail_put_bios:
 	while ((bio = bio_list_pop(&list)))
 		bio_put(bio);
-fail_free_ibr:
-	kfree(ibr);
 fail:
 	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 }
diff --git a/drivers/target/target_core_iblock.h b/drivers/target/target_core_iblock.h
index 01c2afd..ef7f91c 100644
--- a/drivers/target/target_core_iblock.h
+++ b/drivers/target/target_core_iblock.h
@@ -6,11 +6,6 @@
 #define IBLOCK_MAX_CDBS		16
 #define IBLOCK_LBA_SHIFT	9
 
-struct iblock_req {
-	atomic_t pending;
-	atomic_t ib_bio_err_cnt;
-} ____cacheline_aligned;
-
 #define IBDF_HAS_UDEV_PATH		0x01
 
 struct iblock_dev {
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 9bd7559..60a180f 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -469,7 +469,10 @@ struct target_iostate {
 	struct target_iomem	*iomem;
 	struct se_device	*se_dev;
 	void			(*t_comp_func)(struct target_iostate *, u16);
-	void			*priv;
+
+	/* Used by IBLOCK for BIO submission + completion */
+	atomic_t		backend_pending;
+	atomic_t		backend_err_cnt;
 };
 
 struct se_cmd {
-- 
1.9.1



* [PATCH 11/14] target/sbc: Convert sbc_ops->execute_sync_cache to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (9 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 10/14] target/iblock: Fold iblock_req into target_iostate Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 12/14] target/sbc: Convert sbc_ops->execute_write_same " Nicholas A. Bellinger
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts the IBLOCK + FILEIO sbc_ops->execute_sync_cache()
callbacks to accept struct target_iostate, and avoids backend-driver
decoding of the SYNCHRONIZE_CACHE IMMED CDB bit, as reported by HCH.

Reported-by: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@fb.com>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c    | 19 +++++++++----------
 drivers/target/target_core_iblock.c  | 17 ++++++++---------
 drivers/target/target_core_sbc.c     |  6 ++++--
 include/target/target_core_backend.h |  2 +-
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index ed94969..6fc1099 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -309,11 +309,10 @@ static int fd_do_rw(struct target_iostate *ios, struct file *fd,
 }
 
 static sense_reason_t
-fd_execute_sync_cache(struct se_cmd *cmd)
+fd_execute_sync_cache(struct target_iostate *ios, bool immed)
 {
-	struct se_device *dev = cmd->se_dev;
+	struct se_device *dev = ios->se_dev;
 	struct fd_dev *fd_dev = FD_DEV(dev);
-	int immed = (cmd->t_task_cdb[1] & 0x2);
 	loff_t start, end;
 	int ret;
 
@@ -322,18 +321,18 @@ fd_execute_sync_cache(struct se_cmd *cmd)
 	 * for this SYNCHRONIZE_CACHE op
 	 */
 	if (immed)
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		ios->t_comp_func(ios, SAM_STAT_GOOD);
 
 	/*
 	 * Determine if we will be flushing the entire device.
 	 */
-	if (cmd->t_iostate.t_task_lba == 0 && cmd->t_iostate.data_length == 0) {
+	if (ios->t_task_lba == 0 && ios->data_length == 0) {
 		start = 0;
 		end = LLONG_MAX;
 	} else {
-		start = cmd->t_iostate.t_task_lba * dev->dev_attrib.block_size;
-		if (cmd->t_iostate.data_length)
-			end = start + cmd->t_iostate.data_length - 1;
+		start = ios->t_task_lba * dev->dev_attrib.block_size;
+		if (ios->data_length)
+			end = start + ios->data_length - 1;
 		else
 			end = LLONG_MAX;
 	}
@@ -346,9 +345,9 @@ fd_execute_sync_cache(struct se_cmd *cmd)
 		return 0;
 
 	if (ret)
-		target_complete_cmd(cmd, SAM_STAT_CHECK_CONDITION);
+		ios->t_comp_func(ios, SAM_STAT_CHECK_CONDITION);
 	else
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		ios->t_comp_func(ios, SAM_STAT_GOOD);
 
 	return 0;
 }
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index daf052d..931dd7d 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -351,16 +351,16 @@ static void iblock_submit_bios(struct bio_list *list, int rw)
 
 static void iblock_end_io_flush(struct bio *bio)
 {
-	struct se_cmd *cmd = bio->bi_private;
+	struct target_iostate *ios = bio->bi_private;
 
 	if (bio->bi_error)
 		pr_err("IBLOCK: cache flush failed: %d\n", bio->bi_error);
 
-	if (cmd) {
+	if (ios) {
 		if (bio->bi_error)
-			target_complete_cmd(cmd, SAM_STAT_CHECK_CONDITION);
+			ios->t_comp_func(ios, SAM_STAT_CHECK_CONDITION);
 		else
-			target_complete_cmd(cmd, SAM_STAT_GOOD);
+			ios->t_comp_func(ios, SAM_STAT_GOOD);
 	}
 
 	bio_put(bio);
@@ -371,10 +371,9 @@ static void iblock_end_io_flush(struct bio *bio)
  * always flush the whole cache.
  */
 static sense_reason_t
-iblock_execute_sync_cache(struct se_cmd *cmd)
+iblock_execute_sync_cache(struct target_iostate *ios, bool immed)
 {
-	struct iblock_dev *ib_dev = IBLOCK_DEV(cmd->se_dev);
-	int immed = (cmd->t_task_cdb[1] & 0x2);
+	struct iblock_dev *ib_dev = IBLOCK_DEV(ios->se_dev);
 	struct bio *bio;
 
 	/*
@@ -382,13 +381,13 @@ iblock_execute_sync_cache(struct se_cmd *cmd)
 	 * for this SYNCHRONIZE_CACHE op.
 	 */
 	if (immed)
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		ios->t_comp_func(ios, SAM_STAT_GOOD);
 
 	bio = bio_alloc(GFP_KERNEL, 0);
 	bio->bi_end_io = iblock_end_io_flush;
 	bio->bi_bdev = ib_dev->ibd_bd;
 	if (!immed)
-		bio->bi_private = cmd;
+		bio->bi_private = ios;
 	submit_bio(WRITE_FLUSH, bio);
 	return 0;
 }
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 649a3f2..be8dd46 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -463,12 +463,14 @@ sbc_execute_rw(struct target_iostate *ios)
 			       cmd->t_iostate.data_direction, fua_write, &target_complete_ios);
 }
 
-static sense_reason_t sbc_execute_sync_cache(struct target_iostate *ios)
+static sense_reason_t
+sbc_execute_sync_cache(struct target_iostate *ios)
 {
 	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
+	bool immed = (cmd->t_task_cdb[1] & 0x2);
 
-	return ops->execute_sync_cache(cmd);
+	return ops->execute_sync_cache(ios, immed);
 }
 
 static sense_reason_t compare_and_write_post(struct se_cmd *cmd, bool success,
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 5859ea5..47fd1fc 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -48,7 +48,7 @@ struct sbc_ops {
 	sense_reason_t (*execute_rw)(struct target_iostate *ios, struct scatterlist *,
 				     u32, enum dma_data_direction, bool fua_write,
 				     void (*t_comp_func)(struct target_iostate *ios, u16));
-	sense_reason_t (*execute_sync_cache)(struct se_cmd *cmd);
+	sense_reason_t (*execute_sync_cache)(struct target_iostate *ios, bool immed);
 	sense_reason_t (*execute_write_same)(struct se_cmd *cmd);
 	sense_reason_t (*execute_unmap)(struct se_cmd *cmd,
 				sector_t lba, sector_t nolb);
-- 
1.9.1



* [PATCH 12/14] target/sbc: Convert sbc_ops->execute_write_same to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (10 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 11/14] target/sbc: Convert sbc_ops->execute_sync_cache to target_iostate Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 13/14] target/sbc: Convert sbc_ops->execute_unmap " Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 14/14] target: Make sbc_ops accessable via target_backend_ops Nicholas A. Bellinger
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts the IBLOCK + FILEIO sbc_ops->execute_write_same()
callbacks to accept struct target_iostate, and passes a new
get_sectors() function pointer used to decode the command-set specific
number of logical blocks (NoLB).

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c    | 32 ++++++++++++------------
 drivers/target/target_core_iblock.c  | 47 ++++++++++++++++++------------------
 drivers/target/target_core_sbc.c     |  9 ++++---
 include/target/target_core_backend.h |  4 +--
 4 files changed, 48 insertions(+), 44 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 6fc1099..8ddd561 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -353,34 +353,36 @@ fd_execute_sync_cache(struct target_iostate *ios, bool immed)
 }
 
 static sense_reason_t
-fd_execute_write_same(struct se_cmd *cmd)
+fd_execute_write_same(struct target_iostate *ios,
+		      sector_t (*get_sectors)(struct target_iostate *))
 {
-	struct se_device *se_dev = cmd->se_dev;
+	struct target_iomem *iomem = ios->iomem;
+	struct se_device *se_dev = ios->se_dev;
 	struct fd_dev *fd_dev = FD_DEV(se_dev);
-	loff_t pos = cmd->t_iostate.t_task_lba * se_dev->dev_attrib.block_size;
-	sector_t nolb = sbc_get_write_same_sectors(cmd);
+	loff_t pos = ios->t_task_lba * se_dev->dev_attrib.block_size;
+	sector_t nolb = get_sectors(ios);
 	struct iov_iter iter;
 	struct bio_vec *bvec;
 	unsigned int len = 0, i;
 	ssize_t ret;
 
 	if (!nolb) {
-		target_complete_cmd(cmd, SAM_STAT_GOOD);
+		ios->t_comp_func(ios, SAM_STAT_GOOD);
 		return 0;
 	}
-	if (cmd->t_iostate.prot_op) {
+	if (ios->prot_op) {
 		pr_err("WRITE_SAME: Protection information with FILEIO"
 		       " backends not supported\n");
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 
-	if (cmd->t_iomem.t_data_nents > 1 ||
-	    cmd->t_iomem.t_data_sg[0].length != cmd->se_dev->dev_attrib.block_size) {
+	if (iomem->t_data_nents > 1 ||
+	    iomem->t_data_sg[0].length != se_dev->dev_attrib.block_size) {
 		pr_err("WRITE_SAME: Illegal SGL t_data_nents: %u length: %u"
 			" block_size: %u\n",
-			cmd->t_iomem.t_data_nents,
-			cmd->t_iomem.t_data_sg[0].length,
-			cmd->se_dev->dev_attrib.block_size);
+			iomem->t_data_nents,
+			iomem->t_data_sg[0].length,
+			se_dev->dev_attrib.block_size);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
@@ -389,9 +391,9 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
 	for (i = 0; i < nolb; i++) {
-		bvec[i].bv_page = sg_page(&cmd->t_iomem.t_data_sg[0]);
-		bvec[i].bv_len = cmd->t_iomem.t_data_sg[0].length;
-		bvec[i].bv_offset = cmd->t_iomem.t_data_sg[0].offset;
+		bvec[i].bv_page = sg_page(&iomem->t_data_sg[0]);
+		bvec[i].bv_len = iomem->t_data_sg[0].length;
+		bvec[i].bv_offset = iomem->t_data_sg[0].offset;
 
 		len += se_dev->dev_attrib.block_size;
 	}
@@ -405,7 +407,7 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 
-	target_complete_cmd(cmd, SAM_STAT_GOOD);
+	ios->t_comp_func(ios, SAM_STAT_GOOD);
 	return 0;
 }
 
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 931dd7d..9623198 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -412,10 +412,11 @@ iblock_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 }
 
 static sense_reason_t
-iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
+iblock_execute_write_same_direct(struct block_device *bdev, struct target_iostate *ios,
+				 struct target_iomem *iomem, sector_t num_blocks)
 {
-	struct se_device *dev = cmd->se_dev;
-	struct scatterlist *sg = &cmd->t_iomem.t_data_sg[0];
+	struct se_device *dev = ios->se_dev;
+	struct scatterlist *sg = &iomem->t_data_sg[0];
 	struct page *page = NULL;
 	int ret;
 
@@ -423,56 +424,56 @@ iblock_execute_write_same_direct(struct block_device *bdev, struct se_cmd *cmd)
 		page = alloc_page(GFP_KERNEL);
 		if (!page)
 			return TCM_OUT_OF_RESOURCES;
-		sg_copy_to_buffer(sg, cmd->t_iomem.t_data_nents,
+		sg_copy_to_buffer(sg, iomem->t_data_nents,
 				  page_address(page),
 				  dev->dev_attrib.block_size);
 	}
 
 	ret = blkdev_issue_write_same(bdev,
-				target_to_linux_sector(dev,
-					cmd->t_iostate.t_task_lba),
-				target_to_linux_sector(dev,
-					sbc_get_write_same_sectors(cmd)),
+				target_to_linux_sector(dev, ios->t_task_lba),
+				target_to_linux_sector(dev, num_blocks),
 				GFP_KERNEL, page ? page : sg_page(sg));
 	if (page)
 		__free_page(page);
 	if (ret)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 
-	target_complete_cmd(cmd, GOOD);
+	ios->t_comp_func(ios, GOOD);
 	return 0;
 }
 
 static sense_reason_t
-iblock_execute_write_same(struct se_cmd *cmd)
+iblock_execute_write_same(struct target_iostate *ios,
+			  sector_t (*get_sectors)(struct target_iostate *))
 {
-	struct target_iostate *ios = &cmd->t_iostate;
-	struct block_device *bdev = IBLOCK_DEV(cmd->se_dev)->ibd_bd;
+	struct target_iomem *iomem = ios->iomem;
+	struct block_device *bdev = IBLOCK_DEV(ios->se_dev)->ibd_bd;
 	struct scatterlist *sg;
 	struct bio *bio;
 	struct bio_list list;
-	struct se_device *dev = cmd->se_dev;
-	sector_t block_lba = target_to_linux_sector(dev, cmd->t_iostate.t_task_lba);
-	sector_t sectors = target_to_linux_sector(dev,
-					sbc_get_write_same_sectors(cmd));
+	struct se_device *dev = ios->se_dev;
+	sector_t block_lba = target_to_linux_sector(dev, ios->t_task_lba);
+	sector_t num_blocks = get_sectors(ios);
+	sector_t sectors = target_to_linux_sector(dev, num_blocks);
 
-	if (cmd->t_iostate.prot_op) {
+	if (ios->prot_op) {
 		pr_err("WRITE_SAME: Protection information with IBLOCK"
 		       " backends not supported\n");
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
-	sg = &cmd->t_iomem.t_data_sg[0];
+	sg = &iomem->t_data_sg[0];
 
-	if (cmd->t_iomem.t_data_nents > 1 ||
-	    sg->length != cmd->se_dev->dev_attrib.block_size) {
+	if (iomem->t_data_nents > 1 ||
+	    sg->length != dev->dev_attrib.block_size) {
 		pr_err("WRITE_SAME: Illegal SGL t_data_nents: %u length: %u"
-			" block_size: %u\n", cmd->t_iomem.t_data_nents, sg->length,
-			cmd->se_dev->dev_attrib.block_size);
+			" block_size: %u\n", iomem->t_data_nents, sg->length,
+			dev->dev_attrib.block_size);
 		return TCM_INVALID_CDB_FIELD;
 	}
 
 	if (bdev_write_same(bdev))
-		return iblock_execute_write_same_direct(bdev, cmd);
+		return iblock_execute_write_same_direct(bdev, ios, iomem,
+							num_blocks);
 
 	bio = iblock_get_bio(ios, block_lba, 1);
 	if (!bio)
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index be8dd46..a92f169 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -197,8 +197,9 @@ sbc_emulate_startstop(struct target_iostate *ios)
 	return 0;
 }
 
-sector_t sbc_get_write_same_sectors(struct se_cmd *cmd)
+sector_t sbc_get_write_same_sectors(struct target_iostate *ios)
 {
+	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	u32 num_blocks;
 
 	if (cmd->t_task_cdb[0] == WRITE_SAME)
@@ -225,7 +226,7 @@ sbc_execute_write_same_unmap(struct target_iostate *ios)
 {
 	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
-	sector_t nolb = sbc_get_write_same_sectors(cmd);
+	sector_t nolb = sbc_get_write_same_sectors(ios);
 	sense_reason_t ret;
 
 	if (nolb) {
@@ -329,7 +330,7 @@ static sense_reason_t sbc_execute_write_same(struct target_iostate *ios)
 	struct se_cmd *cmd = container_of(ios, struct se_cmd, t_iostate);
 	struct sbc_ops *ops = cmd->protocol_data;
 
-	return ops->execute_write_same(cmd);
+	return ops->execute_write_same(ios, &sbc_get_write_same_sectors);
 }
 
 static sense_reason_t
@@ -337,7 +338,7 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
 {
 	struct se_device *dev = cmd->se_dev;
 	sector_t end_lba = dev->transport->get_blocks(dev) + 1;
-	unsigned int sectors = sbc_get_write_same_sectors(cmd);
+	unsigned int sectors = sbc_get_write_same_sectors(&cmd->t_iostate);
 	sense_reason_t ret;
 
 	if ((flags[0] & 0x04) || (flags[0] & 0x02)) {
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 47fd1fc..f845ff0 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -49,7 +49,8 @@ struct sbc_ops {
 				     u32, enum dma_data_direction, bool fua_write,
 				     void (*t_comp_func)(struct target_iostate *ios, u16));
 	sense_reason_t (*execute_sync_cache)(struct target_iostate *ios, bool immed);
-	sense_reason_t (*execute_write_same)(struct se_cmd *cmd);
+	sense_reason_t (*execute_write_same)(struct target_iostate *ios,
+					sector_t (*get_sectors)(struct target_iostate *));
 	sense_reason_t (*execute_unmap)(struct se_cmd *cmd,
 				sector_t lba, sector_t nolb);
 };
@@ -69,7 +70,6 @@ sense_reason_t	spc_emulate_evpd_83(struct se_cmd *, unsigned char *);
 sense_reason_t	sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops);
 u32	sbc_get_device_rev(struct se_device *dev);
 u32	sbc_get_device_type(struct se_device *dev);
-sector_t	sbc_get_write_same_sectors(struct se_cmd *cmd);
 void	sbc_dif_generate(struct se_cmd *);
 sense_reason_t	sbc_dif_verify(struct target_iostate *, sector_t, unsigned int,
 				     unsigned int, struct scatterlist *, int);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 13/14] target/sbc: Convert sbc_ops->execute_unmap to target_iostate
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (11 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 12/14] target/sbc: Convert sbc_ops->execute_write_same " Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  2016-06-01 21:48 ` [PATCH 14/14] target: Make sbc_ops accessable via target_backend_ops Nicholas A. Bellinger
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch converts the IBLOCK and FILEIO implementations of
sbc_ops->execute_unmap() to accept struct target_iostate.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c    | 21 ++++++++++-----------
 drivers/target/target_core_iblock.c  |  8 ++++----
 drivers/target/target_core_sbc.c     |  4 ++--
 include/target/target_core_backend.h |  4 ++--
 4 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 8ddd561..b4956a5e 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -442,7 +442,7 @@ fd_do_prot_fill(struct se_device *se_dev, sector_t lba, sector_t nolb,
 }
 
 static int
-fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
+fd_do_prot_unmap(struct se_device *dev, sector_t lba, sector_t nolb)
 {
 	void *buf;
 	int rc;
@@ -454,7 +454,7 @@ fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 	}
 	memset(buf, 0xff, PAGE_SIZE);
 
-	rc = fd_do_prot_fill(cmd->se_dev, lba, nolb, buf, PAGE_SIZE);
+	rc = fd_do_prot_fill(dev, lba, nolb, buf, PAGE_SIZE);
 
 	free_page((unsigned long)buf);
 
@@ -462,14 +462,15 @@ fd_do_prot_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 }
 
 static sense_reason_t
-fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
+fd_execute_unmap(struct target_iostate *ios, sector_t lba, sector_t nolb)
 {
-	struct file *file = FD_DEV(cmd->se_dev)->fd_file;
+	struct se_device *dev = ios->se_dev;
+	struct file *file = FD_DEV(dev)->fd_file;
 	struct inode *inode = file->f_mapping->host;
 	int ret;
 
-	if (cmd->se_dev->dev_attrib.pi_prot_type) {
-		ret = fd_do_prot_unmap(cmd, lba, nolb);
+	if (ios->se_dev->dev_attrib.pi_prot_type) {
+		ret = fd_do_prot_unmap(dev, lba, nolb);
 		if (ret)
 			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
@@ -477,11 +478,10 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 	if (S_ISBLK(inode->i_mode)) {
 		/* The backend is block device, use discard */
 		struct block_device *bdev = inode->i_bdev;
-		struct se_device *dev = cmd->se_dev;
 
 		ret = blkdev_issue_discard(bdev,
 					   target_to_linux_sector(dev, lba),
-					   target_to_linux_sector(dev,  nolb),
+					   target_to_linux_sector(dev, nolb),
 					   GFP_KERNEL, 0);
 		if (ret < 0) {
 			pr_warn("FILEIO: blkdev_issue_discard() failed: %d\n",
@@ -490,9 +490,8 @@ fd_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
 		}
 	} else {
 		/* The backend is normal file, use fallocate */
-		struct se_device *se_dev = cmd->se_dev;
-		loff_t pos = lba * se_dev->dev_attrib.block_size;
-		unsigned int len = nolb * se_dev->dev_attrib.block_size;
+		loff_t pos = lba * dev->dev_attrib.block_size;
+		unsigned int len = nolb * dev->dev_attrib.block_size;
 		int mode = FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE;
 
 		if (!file->f_op->fallocate)
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 9623198..00781c8 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -393,15 +393,15 @@ iblock_execute_sync_cache(struct target_iostate *ios, bool immed)
 }
 
 static sense_reason_t
-iblock_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
+iblock_execute_unmap(struct target_iostate *ios, sector_t lba, sector_t nolb)
 {
-	struct block_device *bdev = IBLOCK_DEV(cmd->se_dev)->ibd_bd;
-	struct se_device *dev = cmd->se_dev;
+	struct block_device *bdev = IBLOCK_DEV(ios->se_dev)->ibd_bd;
+	struct se_device *dev = ios->se_dev;
 	int ret;
 
 	ret = blkdev_issue_discard(bdev,
 				   target_to_linux_sector(dev, lba),
-				   target_to_linux_sector(dev,  nolb),
+				   target_to_linux_sector(dev, nolb),
 				   GFP_KERNEL, 0);
 	if (ret < 0) {
 		pr_err("blkdev_issue_discard() failed: %d\n", ret);
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index a92f169..4fcbe8c 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -230,7 +230,7 @@ sbc_execute_write_same_unmap(struct target_iostate *ios)
 	sense_reason_t ret;
 
 	if (nolb) {
-		ret = ops->execute_unmap(cmd, cmd->t_iostate.t_task_lba, nolb);
+		ret = ops->execute_unmap(&cmd->t_iostate, cmd->t_iostate.t_task_lba, nolb);
 		if (ret)
 			return ret;
 	}
@@ -1260,7 +1260,7 @@ sbc_execute_unmap(struct target_iostate *ios)
 			goto err;
 		}
 
-		ret = ops->execute_unmap(cmd, lba, range);
+		ret = ops->execute_unmap(ios, lba, range);
 		if (ret)
 			goto err;
 
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index f845ff0..9efe718 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -51,8 +51,8 @@ struct sbc_ops {
 	sense_reason_t (*execute_sync_cache)(struct target_iostate *ios, bool immed);
 	sense_reason_t (*execute_write_same)(struct target_iostate *ios,
 					sector_t (*get_sectors)(struct target_iostate *));
-	sense_reason_t (*execute_unmap)(struct se_cmd *cmd,
-				sector_t lba, sector_t nolb);
+	sense_reason_t (*execute_unmap)(struct target_iostate *ios,
+					sector_t lba, sector_t nolb);
 };
 
 int	transport_backend_register(const struct target_backend_ops *);
-- 
1.9.1


* [PATCH 14/14] target: Make sbc_ops accessible via target_backend_ops
  2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
                   ` (12 preceding siblings ...)
  2016-06-01 21:48 ` [PATCH 13/14] target/sbc: Convert sbc_ops->execute_unmap " Nicholas A. Bellinger
@ 2016-06-01 21:48 ` Nicholas A. Bellinger
  13 siblings, 0 replies; 15+ messages in thread
From: Nicholas A. Bellinger @ 2016-06-01 21:48 UTC (permalink / raw)
  To: target-devel
  Cc: linux-scsi, Jens Axboe, Christoph Hellwig, Martin Petersen,
	Sagi Grimberg, Hannes Reinecke, Mike Christie, Dave B Minturn,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This patch exposes backend driver sbc_ops function pointers via
target_backend_ops, so that external target consumers can perform
target_iostate + target_iomem I/O submission without being users of
the /sys/kernel/config/target/$FABRIC/ configfs layout.

Specifically, IBLOCK, FILEIO, and RAMDISK now expose their sbc_ops,
while PSCSI and TCMU leave sbc_ops NULL since they perform
SCSI CDB pass-through.

Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin Petersen <martin.petersen@oracle.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_file.c    | 1 +
 drivers/target/target_core_iblock.c  | 1 +
 drivers/target/target_core_pscsi.c   | 1 +
 drivers/target/target_core_rd.c      | 1 +
 drivers/target/target_core_user.c    | 1 +
 include/target/target_core_backend.h | 4 ++++
 6 files changed, 9 insertions(+)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index b4956a5e..6f0064e 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -815,6 +815,7 @@ static const struct target_backend_ops fileio_ops = {
 	.inquiry_prod		= "FILEIO",
 	.inquiry_rev		= FD_VERSION,
 	.owner			= THIS_MODULE,
+	.sbc_ops		= &fd_sbc_ops,
 	.attach_hba		= fd_attach_hba,
 	.detach_hba		= fd_detach_hba,
 	.alloc_device		= fd_alloc_device,
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 00781c8..29d3167 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -833,6 +833,7 @@ static const struct target_backend_ops iblock_ops = {
 	.inquiry_prod		= "IBLOCK",
 	.inquiry_rev		= IBLOCK_VERSION,
 	.owner			= THIS_MODULE,
+	.sbc_ops		= &iblock_sbc_ops,
 	.attach_hba		= iblock_attach_hba,
 	.detach_hba		= iblock_detach_hba,
 	.alloc_device		= iblock_alloc_device,
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index c52f943..4284dbf 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -1127,6 +1127,7 @@ static void pscsi_req_done(struct request *req, int uptodate)
 static const struct target_backend_ops pscsi_ops = {
 	.name			= "pscsi",
 	.owner			= THIS_MODULE,
+	.sbc_ops		= NULL,
 	.transport_flags	= TRANSPORT_FLAG_PASSTHROUGH,
 	.attach_hba		= pscsi_attach_hba,
 	.detach_hba		= pscsi_detach_hba,
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index a38a37f..e9df036 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -662,6 +662,7 @@ static const struct target_backend_ops rd_mcp_ops = {
 	.name			= "rd_mcp",
 	.inquiry_prod		= "RAMDISK-MCP",
 	.inquiry_rev		= RD_MCP_VERSION,
+	.sbc_ops		= &rd_sbc_ops,
 	.attach_hba		= rd_attach_hba,
 	.detach_hba		= rd_detach_hba,
 	.alloc_device		= rd_alloc_device,
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 3467560..ec6142b 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1151,6 +1151,7 @@ static const struct target_backend_ops tcmu_ops = {
 	.name			= "user",
 	.owner			= THIS_MODULE,
 	.transport_flags	= TRANSPORT_FLAG_PASSTHROUGH,
+	.sbc_ops		= NULL,
 	.attach_hba		= tcmu_attach_hba,
 	.detach_hba		= tcmu_detach_hba,
 	.alloc_device		= tcmu_alloc_device,
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 9efe718..15f731f 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -8,6 +8,10 @@ struct target_backend_ops {
 	char inquiry_prod[16];
 	char inquiry_rev[4];
 	struct module *owner;
+	/*
+	 * Used by NVMe-target for se_cmd dispatch without SCSI CDB parsing
+	 */
+	struct sbc_ops *sbc_ops;
 
 	u8 transport_flags;
 
-- 
1.9.1


end of thread, other threads:[~2016-06-01 21:49 UTC | newest]

Thread overview: 15+ messages
2016-06-01 21:48 [PATCH 00/14] target: Allow backends to operate independent of se_cmd Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 01/14] target: Fix for hang of Ordered task in TCM Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 02/14] target: Add target_iomem descriptor Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 03/14] target: Add target_iostate descriptor Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 04/14] target: Add target_complete_ios wrapper Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 05/14] target: Setup target_iostate memory in __target_execute_cmd Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 06/14] target: Convert se_cmd->execute_cmd to target_iostate Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 07/14] target/sbc: Convert sbc_ops->execute_rw " Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 08/14] target/sbc: Convert sbc_dif_copy_prot " Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 09/14] target/file: Convert sbc_dif_verify " Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 10/14] target/iblock: Fold iblock_req into target_iostate Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 11/14] target/sbc: Convert sbc_ops->execute_sync_cache to target_iostate Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 12/14] target/sbc: Convert sbc_ops->execute_write_same " Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 13/14] target/sbc: Convert sbc_ops->execute_unmap " Nicholas A. Bellinger
2016-06-01 21:48 ` [PATCH 14/14] target: Make sbc_ops accessible via target_backend_ops Nicholas A. Bellinger
