linux-scsi.vger.kernel.org archive mirror
* [PATCH v2 1/2] tcm: Add support for BIDI-COMMANDS and XDWRITE_READ_10 emulation
@ 2010-09-15 22:41 Nicholas A. Bellinger
  2010-09-19 13:29 ` Boaz Harrosh
  0 siblings, 1 reply; 4+ messages in thread
From: Nicholas A. Bellinger @ 2010-09-15 22:41 UTC (permalink / raw)
  To: linux-scsi, linux-kernel, Boaz Harrosh, FUJITA Tomonori
  Cc: Mike Christie, Hannes Reinecke, James Bottomley,
	Konrad Rzeszutek Wilk, Douglas Gilbert, Joe Eykholt,
	Nicholas Bellinger

From: Nicholas Bellinger <nab@linux-iscsi.org>

This v2 patch series adds BIDI-COMMANDS support to TCM Core.  It extends
struct se_cmd to handle BIDI READ payloads using a new
struct se_transport_task->t_mem_bidi_list and struct se_transport_task->t_tasks_se_bidi_num.
The model employed is to keep the WRITE payload at struct se_transport_task->t_mem_list,
and to add a new BIDI READ payload memory list at struct se_transport_task->t_mem_bidi_list.

*) descriptor setup:

This patch adds support for XDWRITEREAD_10 within transport_generic_cmd_sequencer(),
and sets the new struct se_cmd->transport_complete_callback() completion function
to transport_xor_callback().  It then updates transport_new_cmd_obj() to handle the
BIDI READ case.

It also adds checking of BIDI READ data payloads in transport_generic_map_mem_to_cmd(),
which signals the allocation of struct se_transport_task->t_mem_bidi_list in the same function.

*) memory mapping:

This patch updates transport_generic_get_cdb_count() to accept an enum dma_data_direction
parameter to handle the BIDI and the existing WRITE/READ cases for SCF_SCSI_DATA_SG_IO_CDB.
This patch also updates transport_generic_map_mem_to_cmd() to accept a 'void *mem_bidi_in'
and 'u32 se_mem_bidi_num' from BIDI-capable TCM fabric modules.

It then updates transport_generic_get_task() and struct se_cmd->transport_get_task()
to accept an enum dma_data_direction function parameter.
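
As an illustration (not part of this patch), a BIDI-capable fabric module that already
holds separate WRITE and READ scatterlists would map both in a single call along the
lines of the sketch below; the scatterlist pointers and counts are hypothetical names:

	/*
	 * Sketch only: hypothetical fabric module mapping its pre-built
	 * WRITE (data-out) and BIDI READ (data-in) scatterlists into TCM
	 * Core using the extended API from this patch.
	 */
	ret = transport_generic_map_mem_to_cmd(se_cmd,
			write_sgl, write_sgl_count,	/* WRITE payload */
			read_sgl, read_sgl_count);	/* BIDI READ payload */
	if (ret < 0)
		return ret;

	/* Non-BIDI callers simply pass NULL and 0 for the last two arguments. */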

*) descriptor callback:

For the struct se_cmd callback, it updates transport_generic_complete_ok() to support
BIDI-COMMANDS by invoking the new generic struct se_cmd->transport_complete_callback()
(set up in transport_generic_cmd_sequencer()) to perform the post READ/WRITE XOR emulation.
This also includes the addition of transport_memcpy_se_mem_read_contig(), used to copy the
WRITE scatterlists into a local contiguous buffer for the XOR operations within
transport_xor_callback().
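
Conceptually, once transport_memcpy_se_mem_read_contig() has flattened the data-out
payload, the emulation reduces to a byte-wise XOR against the blocks read from media
(sketch only; the real transport_xor_callback() walks the struct se_mem list under
kmap_atomic()):

	/*
	 * Simplified view of the XDWRITEREAD_10 emulation: 'read_buf' holds
	 * the blocks read from media (the BIDI READ list), 'write_buf' the
	 * flattened data-out payload.  The XOR result overwrites the READ
	 * payload, which is later queued to the initiator as data-in.
	 */
	for (i = 0; i < cmd->data_length; i++)
		read_buf[i] ^= write_buf[i];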

*) descriptor release:

Update transport_free_pages() to walk the new T_TASK(cmd)->t_mem_bidi_list (when available)
and release its struct se_mem entries and pages.

So far this has been tested with TCM_Loop using BSG w/ userspace code generating
BIDI XDWRITE_READ_10 CDBs.
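
For reference, the userspace side of that test submits the CDB through the BSG
sg_io_v4 interface roughly as sketched below (device node, buffer names and sizes
are illustrative, error handling trimmed):

	/*
	 * Illustrative BSG BIDI submission; assumes <linux/bsg.h>, <scsi/sg.h>,
	 * <sys/ioctl.h>, <stdint.h>, <string.h>, <stdio.h> and an already
	 * opened /dev/bsg/<H:C:T:L> descriptor 'bsg_fd'.  'out_buf'/'in_buf'
	 * of 'xfer_len' bytes each are hypothetical.
	 */
	struct sg_io_v4 io;
	unsigned char cdb[10], sense[32];

	memset(cdb, 0, sizeof(cdb));
	cdb[0] = 0x53;				/* XDWRITEREAD_10 */
	/* LBA and transfer length go into cdb[2..5] and cdb[7..8] */

	memset(&io, 0, sizeof(io));
	io.guard = 'Q';
	io.protocol = BSG_PROTOCOL_SCSI;
	io.subprotocol = BSG_SUB_PROTOCOL_SCSI_CMD;
	io.request_len = sizeof(cdb);
	io.request = (uintptr_t)cdb;
	io.dout_xfer_len = xfer_len;		/* WRITE payload */
	io.dout_xferp = (uintptr_t)out_buf;
	io.din_xfer_len = xfer_len;		/* BIDI READ (XOR result) */
	io.din_xferp = (uintptr_t)in_buf;
	io.max_response_len = sizeof(sense);
	io.response = (uintptr_t)sense;
	io.timeout = 30000;			/* milliseconds */

	if (ioctl(bsg_fd, SG_IO, &io) < 0)
		perror("SG_IO");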

Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
---
 drivers/target/target_core_pr.c        |    3 +-
 drivers/target/target_core_transport.c |  278 +++++++++++++++++++++++++++-----
 include/target/target_core_base.h      |   10 +-
 include/target/target_core_transport.h |   10 +-
 4 files changed, 254 insertions(+), 47 deletions(-)

diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
index 4c408cb..141ce48 100644
--- a/drivers/target/target_core_pr.c
+++ b/drivers/target/target_core_pr.c
@@ -497,8 +497,7 @@ static int core_scsi3_pr_seq_non_holder(
 	 * WRITE_EXCLUSIVE_* reservation.
 	 */
 	if ((we) && !(registered_nexus)) {
-		if ((cmd->data_direction == DMA_TO_DEVICE) ||
-		    (cmd->data_direction == DMA_BIDIRECTIONAL)) {
+		if (cmd->data_direction == DMA_TO_DEVICE) {
 			/*
 			 * Conflict for write exclusive
 			 */
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 517a59c..e709771 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -2529,7 +2529,8 @@ static inline int transport_check_device_cdb_sector_count(
 static struct se_task *transport_generic_get_task(
 	struct se_transform_info *ti,
 	struct se_cmd *cmd,
-	void *se_obj_ptr)
+	void *se_obj_ptr,
+	enum dma_data_direction data_direction)
 {
 	struct se_task *task;
 	struct se_device *dev = SE_DEV(cmd);
@@ -2625,7 +2626,8 @@ static int transport_process_control_sg_transform(
 		return -1;
 	}
 
-	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
+	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
+				cmd->data_direction);
 	if (!(task))
 		return -1;
 
@@ -2665,7 +2667,8 @@ static int transport_process_control_nonsg_transform(
 	unsigned char *cdb;
 	struct se_task *task;
 
-	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
+	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
+				cmd->data_direction);
 	if (!(task))
 		return -1;
 
@@ -2699,7 +2702,8 @@ static int transport_process_non_data_transform(
 	unsigned char *cdb;
 	struct se_task *task;
 
-	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
+	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
+				cmd->data_direction);
 	if (!(task))
 		return -1;
 
@@ -2741,11 +2745,6 @@ struct se_cmd *__transport_alloc_se_cmd(
 	unsigned char *sense_buffer;
 	int gfp_type = (in_interrupt()) ? GFP_ATOMIC : GFP_KERNEL;
 
-	if (data_direction == DMA_BIDIRECTIONAL) {
-		printk(KERN_ERR "SCSI BiDirectional mode not supported yet\n");
-		return ERR_PTR(-ENOSYS);
-	}
-
 	cmd = kmem_cache_zalloc(se_cmd_cache, gfp_type);
 	if (!(cmd)) {
 		printk(KERN_ERR "kmem_cache_alloc() failed for se_cmd_cache\n");
@@ -5183,6 +5182,54 @@ int transport_generic_emulate_request_sense(
 }
 EXPORT_SYMBOL(transport_generic_emulate_request_sense);
 
+static void transport_xor_callback(struct se_cmd *cmd)
+{
+	unsigned char *buf, *addr;
+	struct se_mem *se_mem;
+	unsigned int offset;
+	int i;
+	/*
+	 * From sbc3r22.pdf section 5.48 XDWRITEREAD (10) command
+	 *
+	 * 1) read the specified logical block(s);
+	 * 2) transfer logical blocks from the data-out buffer;
+	 * 3) XOR the logical blocks transferred from the data-out buffer with
+	 *    the logical blocks read, storing the resulting XOR data in a buffer;
+	 * 4) if the DISABLE WRITE bit is set to zero, then write the logical
+	 *    blocks transferred from the data-out buffer; and
+	 * 5) transfer the resulting XOR data to the data-in buffer.
+	 */
+	buf = kmalloc(cmd->data_length, GFP_KERNEL);
+	if (!(buf)) {
+		printk(KERN_ERR "Unable to allocate xor_callback buf\n");
+		return;
+	}
+	/*
+	 * Copy the scatterlist WRITE buffer located at T_TASK(cmd)->t_mem_list
+	 * into the locally allocated *buf
+	 */
+	transport_memcpy_se_mem_read_contig(cmd, buf, T_TASK(cmd)->t_mem_list);
+	/*
+	 * Now perform the XOR against the BIDI read memory located at
+	 * T_TASK(cmd)->t_mem_bidi_list
+	 */
+
+	offset = 0;
+	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_bidi_list, se_list) {
+		addr = (unsigned char *)kmap_atomic(se_mem->se_page, KM_USER0);
+		if (!(addr))
+			goto out;
+
+		for (i = 0; i < se_mem->se_len; i++)
+			*(addr + se_mem->se_off + i) ^= *(buf + offset + i);
+
+		offset += se_mem->se_len;
+		kunmap_atomic(addr, KM_USER0);
+	}
+out:
+	kfree(buf);
+}
+
 /*
  * Used to obtain Sense Data from underlying Linux/SCSI struct scsi_cmnd
  */
@@ -5472,6 +5519,26 @@ static int transport_generic_cmd_sequencer(
 		T_TASK(cmd)->t_tasks_fua = (cdb[1] & 0x8);
 		ret = TGCS_DATA_SG_IO_CDB;
 		break;
+	case XDWRITEREAD_10:
+		SET_GENERIC_TRANSPORT_FUNCTIONS(cmd);
+		if ((cmd->data_direction != DMA_TO_DEVICE) ||
+		    !(T_TASK(cmd)->t_tasks_bidi))
+			return TGCS_INVALID_CDB_FIELD;
+		sectors = transport_get_sectors_10(cdb, cmd, &sector_ret);
+		if (sector_ret)
+			return TGCS_UNSUPPORTED_CDB;
+		size = transport_get_size(sectors, cdb, cmd);
+		transport_dev_get_mem_SG(cmd->se_orig_obj_ptr, cmd);
+		transport_get_maps(cmd);
+		cmd->transport_split_cdb = &split_cdb_XX_10;
+		cmd->transport_get_lba = &transport_lba_32;
+		/*
+		 * Setup BIDI XOR callback to be run during transport_generic_complete_ok()
+		 */
+		cmd->transport_complete_callback = &transport_xor_callback;
+		T_TASK(cmd)->t_tasks_fua = (cdb[1] & 0x8);
+		ret = TGCS_DATA_SG_IO_CDB;
+		break;
 	case 0xa3:
 		SET_GENERIC_TRANSPORT_FUNCTIONS(cmd);
 		if (TRANSPORT(dev)->get_device_type(dev) != TYPE_ROM) {
@@ -6096,6 +6163,33 @@ void transport_memcpy_read_contig(
 }
 EXPORT_SYMBOL(transport_memcpy_read_contig);
 
+void transport_memcpy_se_mem_read_contig(
+	struct se_cmd *cmd,
+	unsigned char *dst,
+	struct list_head *se_mem_list)
+{
+	struct se_mem *se_mem;
+	void *src;
+	u32 length = 0, total_length = cmd->data_length;
+
+	list_for_each_entry(se_mem, se_mem_list, se_list) {
+		length = se_mem->se_len;
+
+		if (length > total_length)
+			length = total_length;
+
+		src = page_address(se_mem->se_page) + se_mem->se_off;
+
+		memcpy(dst, src, length);
+
+		if (!(total_length -= length))
+			return;
+
+		dst += length;
+	}
+}
+
+
 /*     transport_generic_passthrough():
  *
  *
@@ -6249,6 +6343,12 @@ void transport_generic_complete_ok(struct se_cmd *cmd)
 			return;
 		}
 	}
+	/*
+	 * Check for a callback, used amongst other things by
+	 * XDWRITE_READ_10 emulation.
+	 */
+	if (cmd->transport_complete_callback)
+		cmd->transport_complete_callback(cmd);
 
 	switch (cmd->data_direction) {
 	case DMA_FROM_DEVICE:
@@ -6267,6 +6367,19 @@ void transport_generic_complete_ok(struct se_cmd *cmd)
 				cmd->data_length;
 		}
 		spin_unlock(&cmd->se_lun->lun_sep_lock);
+		/*
+		 * Check if we need to send READ payload for BIDI-COMMAND
+		 */
+		if (T_TASK(cmd)->t_mem_bidi_list != NULL) {
+			spin_lock(&cmd->se_lun->lun_sep_lock);
+			if (SE_LUN(cmd)->lun_sep) {
+				SE_LUN(cmd)->lun_sep->sep_stats.tx_data_octets +=
+					cmd->data_length;
+			}
+			spin_unlock(&cmd->se_lun->lun_sep_lock);
+			CMD_TFO(cmd)->queue_data_in(cmd);
+			break;
+		}
 		/* Fall through for DMA_TO_DEVICE */
 	case DMA_NONE:
 		CMD_TFO(cmd)->queue_status(cmd);
@@ -6347,6 +6460,23 @@ static inline void transport_free_pages(struct se_cmd *cmd)
 		kmem_cache_free(se_mem_cache, se_mem);
 	}
 
+	if (T_TASK(cmd)->t_mem_bidi_list && T_TASK(cmd)->t_tasks_se_bidi_num) {
+		list_for_each_entry_safe(se_mem, se_mem_tmp,
+				T_TASK(cmd)->t_mem_bidi_list, se_list) {
+			/*
+			 * We only call __free_page(struct se_mem->se_page) when
+			 * SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is NOT in use.
+			 */
+			if (free_page)
+				__free_page(se_mem->se_page);
+
+			list_del(&se_mem->se_list);
+			kmem_cache_free(se_mem_cache, se_mem);
+		}
+	}
+
+	kfree(T_TASK(cmd)->t_mem_bidi_list);
+	T_TASK(cmd)->t_mem_bidi_list = NULL;
 	kfree(T_TASK(cmd)->t_mem_list);
 	T_TASK(cmd)->t_mem_list = NULL;
 	T_TASK(cmd)->t_tasks_se_num = 0;
@@ -6477,7 +6607,9 @@ release_cmd:
 int transport_generic_map_mem_to_cmd(
 	struct se_cmd *cmd,
 	void *mem,
-	u32 se_mem_num)
+	u32 se_mem_num,
+	void *mem_bidi_in,
+	u32 se_mem_bidi_num)
 {
 	u32 se_mem_cnt_out = 0;
 	int ret;
@@ -6489,6 +6621,12 @@ int transport_generic_map_mem_to_cmd(
 	 * struct se_mem elements...
 	 */
 	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM)) {
+		if ((mem_bidi_in) || (se_mem_bidi_num)) {
+			printk(KERN_ERR "SCF_CMD_PASSTHROUGH_NOALLOC not supported"
+				" with BIDI-COMMAND\n");
+			return -ENOSYS;
+		}
+
 		T_TASK(cmd)->t_mem_list = (struct list_head *)mem;
 		T_TASK(cmd)->t_tasks_se_num = se_mem_num;
 		cmd->se_cmd_flags |= SCF_CMD_PASSTHROUGH_NOALLOC;
@@ -6507,14 +6645,35 @@ int transport_generic_map_mem_to_cmd(
 		 */ 
 		T_TASK(cmd)->t_mem_list = transport_init_se_mem_list();
 		if (!(T_TASK(cmd)->t_mem_list))
-			return -1;
+			return -ENOMEM;
 
 		ret = transport_map_sg_to_mem(cmd,
 			T_TASK(cmd)->t_mem_list, mem, &se_mem_cnt_out);
 		if (ret < 0)
-			return -1;
+			return -ENOMEM;
 
 		T_TASK(cmd)->t_tasks_se_num = se_mem_cnt_out;
+		/*
+		 * Setup BIDI READ list of struct se_mem elements
+		 */
+		if ((mem_bidi_in) && (se_mem_bidi_num)) {
+			T_TASK(cmd)->t_mem_bidi_list = transport_init_se_mem_list();
+			if (!(T_TASK(cmd)->t_mem_bidi_list)) {
+				kfree(T_TASK(cmd)->t_mem_list);
+				return -ENOMEM;
+			}
+			se_mem_cnt_out = 0;
+
+			ret = transport_map_sg_to_mem(cmd,
+				T_TASK(cmd)->t_mem_bidi_list, mem_bidi_in,
+				&se_mem_cnt_out);
+			if (ret < 0) {
+				kfree(T_TASK(cmd)->t_mem_list);
+				return -ENOMEM;
+			}
+
+			T_TASK(cmd)->t_tasks_se_bidi_num = se_mem_cnt_out;
+		}
 		cmd->se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
 
 	} else if (cmd->se_cmd_flags & SCF_SCSI_CONTROL_NONSG_IO_CDB) {
@@ -6610,6 +6769,11 @@ non_scsi_data:
  */
 int transport_generic_do_transform(struct se_cmd *cmd, struct se_transform_info *ti)
 {
+	if (!(cmd->transport_cdb_transform)) {
+		dump_stack();
+		return -1;
+	}
+
 	if (cmd->transport_cdb_transform(cmd, ti) < 0)
 		return -1;
 
@@ -6656,9 +6820,8 @@ int transport_new_cmd_obj(
 	struct se_transform_info *ti,
 	int post_execute)
 {
-	u32 task_cdbs = 0;
-	struct se_mem *se_mem_out = NULL;
 	struct se_device *dev = SE_DEV(cmd);
+	u32 task_cdbs = 0, rc;
 
 	if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB)) {
 		task_cdbs++;
@@ -6666,11 +6829,32 @@ int transport_new_cmd_obj(
 	} else {
 		ti->ti_set_counts = 1;
 		ti->ti_dev = dev;
-
+		/*
+		 * Setup any BIDI READ tasks and memory from
+		 * T_TASK(cmd)->t_mem_bidi_list so the READ struct se_tasks
+		 * are queued first..
+		 */
+		if (T_TASK(cmd)->t_mem_bidi_list != NULL) {
+			rc = transport_generic_get_cdb_count(cmd, ti,
+				T_TASK(cmd)->t_task_lba,
+				T_TASK(cmd)->t_tasks_sectors,
+				DMA_FROM_DEVICE, T_TASK(cmd)->t_mem_bidi_list);
+			if (!(rc)) {
+				cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
+				cmd->scsi_sense_reason =
+					TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+				return PYX_TRANSPORT_LU_COMM_FAILURE;
+			}
+			ti->ti_set_counts = 0;
+		}
+		/*
+		 * Setup the tasks and memory from T_TASK(cmd)->t_mem_list
+		 * Note for BIDI transfers this will contain the WRITE payload
+		 */
 		task_cdbs = transport_generic_get_cdb_count(cmd, ti,
 				T_TASK(cmd)->t_task_lba,
 				T_TASK(cmd)->t_tasks_sectors,
-				NULL, &se_mem_out);
+				cmd->data_direction, T_TASK(cmd)->t_mem_list);
 		if (!(task_cdbs)) {
 			cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
 			cmd->scsi_sense_reason =
@@ -6743,7 +6927,17 @@ int transport_generic_get_mem(struct se_cmd *cmd, u32 length, u32 dma_size)
 
 	T_TASK(cmd)->t_mem_list = transport_init_se_mem_list();
 	if (!(T_TASK(cmd)->t_mem_list))
-		return -1;
+		return -ENOMEM;
+	/*
+	 * Setup BIDI-COMMAND READ list of struct se_mem elements
+	 */
+	if (T_TASK(cmd)->t_tasks_bidi) {
+		T_TASK(cmd)->t_mem_bidi_list = transport_init_se_mem_list();
+		if (!(T_TASK(cmd)->t_mem_bidi_list)) {
+			kfree(T_TASK(cmd)->t_mem_list);
+			return -ENOMEM;
+		}
+	}
 
 	while (length) {
 		se_mem = kmem_cache_zalloc(se_mem_cache, GFP_KERNEL);
@@ -7240,28 +7434,28 @@ u32 transport_generic_get_cdb_count(
 	struct se_transform_info *ti,
 	unsigned long long starting_lba,
 	u32 sectors,
-	struct se_mem *se_mem_in,
-	struct se_mem **se_mem_out)
+	enum dma_data_direction data_direction,
+	struct list_head *mem_list)
 {
 	unsigned char *cdb = NULL;
 	struct se_task *task;
-	struct se_mem *se_mem, *se_mem_lout = NULL;
+	struct se_mem *se_mem = NULL, *se_mem_lout = NULL;
 	struct se_device *dev = SE_DEV(cmd);
 	int max_sectors_set = 0, ret;
 	u32 task_offset_in = 0, se_mem_cnt = 0, task_cdbs = 0;
 	unsigned long long lba;
 
-	if (!se_mem_in) {
-		list_for_each_entry(se_mem_in, T_TASK(cmd)->t_mem_list, se_list)
-			break;
-
-		if (!se_mem_in) {
-			printk(KERN_ERR "se_mem_in is NULL\n");
-			return 0;
-		}
+	if (!mem_list) {
+		printk(KERN_ERR "mem_list is NULL in transport_generic_get"
+				"_cdb_count()\n");
+		return 0;
 	}
-	se_mem = se_mem_in;
-
+	/*
+	 * Using RAMDISK_DR backstores is the only case where
+	 * mem_list will ever be empty at this point.
+	 */
+	if (!(list_empty(mem_list)))
+		se_mem = list_entry(mem_list->next, struct se_mem, se_list);
 	/*
 	 * Locate the start volume segment in which the received LBA will be
 	 * executed upon.
@@ -7280,7 +7474,12 @@ u32 transport_generic_get_cdb_count(
 			CMD_TFO(cmd)->get_task_tag(cmd), lba, sectors,
 			transport_dev_end_lba(dev));
 
-		task = cmd->transport_get_task(ti, cmd, dev);
+		if (!(cmd->transport_get_task)) {
+			dump_stack();
+			goto out;
+		}
+
+		task = cmd->transport_get_task(ti, cmd, dev, data_direction);
 		if (!(task))
 			goto out;
 
@@ -7293,7 +7492,7 @@ u32 transport_generic_get_cdb_count(
 		task->task_size = (task->task_sectors *
 				   DEV_ATTRIB(dev)->block_size);
 		task->transport_map_task = transport_dev_get_map_SG(dev,
-					cmd->data_direction);
+					data_direction);
 
 		cdb = TRANSPORT(dev)->get_cdb(task);
 		if ((cdb)) {
@@ -7306,14 +7505,13 @@ u32 transport_generic_get_cdb_count(
 		 * Perform the SE OBJ plugin and/or Transport plugin specific
 		 * mapping for T_TASK(cmd)->t_mem_list.
 		 */
-		ret = transport_do_se_mem_map(dev, task,
-				T_TASK(cmd)->t_mem_list, NULL, se_mem,
-				&se_mem_lout, &se_mem_cnt, &task_offset_in);
+		ret = transport_do_se_mem_map(dev, task, mem_list,
+				NULL, se_mem, &se_mem_lout, &se_mem_cnt,
+				&task_offset_in);
 		if (ret < 0)
 			goto out;
 
 		se_mem = se_mem_lout;
-		*se_mem_out = se_mem_lout;
 		task_cdbs++;
 
 		DEBUG_VOL("Incremented task_cdbs(%u) task->task_sg_num(%u)\n",
@@ -7333,8 +7531,9 @@ u32 transport_generic_get_cdb_count(
 		atomic_inc(&T_TASK(cmd)->t_se_count);
 	}
 
-	DEBUG_VOL("ITT[0x%08x] total cdbs(%u)\n",
-		CMD_TFO(cmd)->get_task_tag(cmd), task_cdbs);
+	DEBUG_VOL("ITT[0x%08x] total %s cdbs(%u)\n",
+		CMD_TFO(cmd)->get_task_tag(cmd), (data_direction == DMA_TO_DEVICE)
+		? "DMA_TO_DEVICE" : "DMA_FROM_DEVICE", task_cdbs);
 
 	return task_cdbs;
 out:
@@ -8129,8 +8328,7 @@ void transport_send_task_abort(struct se_cmd *cmd)
 	 * response.  This response with TASK_ABORTED status will be
 	 * queued back to fabric module by transport_check_aborted_status().
 	 */
-	if ((cmd->data_direction == DMA_TO_DEVICE) ||
-	    (cmd->data_direction == DMA_BIDIRECTIONAL)) {
+	if (cmd->data_direction == DMA_TO_DEVICE) {
 		if (CMD_TFO(cmd)->write_pending_status(cmd) != 0) {
 			atomic_inc(&T_TASK(cmd)->t_transport_aborted);
 			smp_mb__after_atomic_inc();
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index e9b98e9..97e9715 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -414,11 +414,13 @@ struct se_transport_task {
 	unsigned long long	t_task_lba;
 	int			t_tasks_failed;
 	int			t_tasks_fua;
+	int			t_tasks_bidi:1;
 	u32			t_task_cdbs;
 	u32			t_tasks_check;
 	u32			t_tasks_no;
 	u32			t_tasks_sectors;
 	u32			t_tasks_se_num;
+	u32			t_tasks_se_bidi_num;
 	u32			t_tasks_sg_chained_no;
 	atomic_t		t_fe_count;
 	atomic_t		t_se_count;
@@ -447,8 +449,10 @@ struct se_transport_task {
 	struct scatterlist	t_tasks_sg_bounce;
 	void			*t_task_buf;
 	void			*t_task_pt_buf;
-	struct list_head	t_task_list;
 	struct list_head	*t_mem_list;
+	/* Used for BIDI READ */
+	struct list_head	*t_mem_bidi_list;
+	struct list_head	t_task_list;
 } ____cacheline_aligned;
 
 struct se_task {
@@ -598,7 +602,8 @@ struct se_cmd {
 	u32 (*transport_get_lba)(unsigned char *);
 	unsigned long long (*transport_get_long_lba)(unsigned char *);
 	struct se_task *(*transport_get_task)(struct se_transform_info *,
-					struct se_cmd *, void *);
+					struct se_cmd *, void *,
+					enum dma_data_direction);
 	int (*transport_map_buffers_to_tasks)(struct se_cmd *);
 	void (*transport_map_SG_segments)(struct se_unmap_sg *);
 	void (*transport_passthrough_done)(struct se_cmd *);
@@ -607,6 +612,7 @@ struct se_cmd {
 					struct se_unmap_sg *);
 	void (*transport_split_cdb)(unsigned long long, u32 *, unsigned char *);
 	void (*transport_wait_for_tasks)(struct se_cmd *, int, int);
+	void (*transport_complete_callback)(struct se_cmd *);
 	void (*callback)(struct se_cmd *cmd, void *callback_arg,
 			int complete_status);
 	void *callback_arg;
diff --git a/include/target/target_core_transport.h b/include/target/target_core_transport.h
index ef4c084..20702d7 100644
--- a/include/target/target_core_transport.h
+++ b/include/target/target_core_transport.h
@@ -232,6 +232,8 @@ extern void transport_memcpy_write_contig(struct se_cmd *, struct scatterlist *,
 				unsigned char *);
 extern void transport_memcpy_read_contig(struct se_cmd *, unsigned char *,
 				struct scatterlist *);
+extern void transport_memcpy_se_mem_read_contig(struct se_cmd *,
+				unsigned char *, struct list_head *);
 extern int transport_generic_passthrough_async(struct se_cmd *cmd,
 				void(*callback)(struct se_cmd *cmd,
 				void *callback_arg, int complete_status),
@@ -242,7 +244,8 @@ extern void transport_generic_complete_ok(struct se_cmd *);
 extern void transport_free_dev_tasks(struct se_cmd *);
 extern void transport_release_fe_cmd(struct se_cmd *);
 extern int transport_generic_remove(struct se_cmd *, int, int);
-extern int transport_generic_map_mem_to_cmd(struct se_cmd *cmd, void *, u32);
+extern int transport_generic_map_mem_to_cmd(struct se_cmd *cmd, void *, u32,
+				void *, u32);
 extern int transport_lun_wait_for_tasks(struct se_cmd *, struct se_lun *);
 extern int transport_clear_lun_from_sessions(struct se_lun *);
 extern int transport_check_aborted_status(struct se_cmd *, int);
@@ -271,8 +274,9 @@ extern int transport_map_mem_to_sg(struct se_task *, struct list_head *,
 extern void transport_do_task_sg_chain(struct se_cmd *);
 extern u32 transport_generic_get_cdb_count(struct se_cmd *,
 					struct se_transform_info *,
-					unsigned long long, u32, struct se_mem *,
-					struct se_mem **);
+					unsigned long long, u32,
+					enum dma_data_direction,
+					struct list_head *);
 extern int transport_generic_new_cmd(struct se_cmd *);
 extern void transport_generic_process_write(struct se_cmd *);
 extern int transport_generic_do_tmr(struct se_cmd *);
-- 
1.5.6.5


* Re: [PATCH v2 1/2] tcm: Add support for BIDI-COMMANDS and XDWRITE_READ_10 emulation
  2010-09-15 22:41 [PATCH v2 1/2] tcm: Add support for BIDI-COMMANDS and XDWRITE_READ_10 emulation Nicholas A. Bellinger
@ 2010-09-19 13:29 ` Boaz Harrosh
  2010-09-19 13:35   ` Boaz Harrosh
  2010-09-20  9:02   ` Nicholas A. Bellinger
  0 siblings, 2 replies; 4+ messages in thread
From: Boaz Harrosh @ 2010-09-19 13:29 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: linux-scsi, linux-kernel, FUJITA Tomonori, Mike Christie,
	Hannes Reinecke, James Bottomley, Konrad Rzeszutek Wilk,
	Douglas Gilbert, Joe Eykholt

On 09/16/2010 12:41 AM, Nicholas A. Bellinger wrote:
> From: Nicholas Bellinger <nab@linux-iscsi.org>
> 

Hi dear Nicholas

I still have a few reservations regarding the use of the:
+	int			t_tasks_bidi:1;

in struct se_transport_task; at minimum I'd use t_tasks_se_bidi_num
as a look-ahead instead, but I hate that as well.  I suspect none of
this is needed.

At http://git.kernel.org/?p=linux/kernel/git/nab/lio-4.0.git I still get
an old head without these or the CDB32 stuff.

Where can I find a git web with the latest bits?  I'd like to have a
closer look.

(You know that I have a vested interest in all this I need it to be solid)

> This v2 patch series adds BIDI-COMMANDS support to TCM Core.  It extends
> struct se_cmd to handle BIDI READ payloads using a new
> struct se_transport_task->t_mem_bidi_list and struct se_transport_task->t_tasks_se_bidi_num.
> The model employed is to keep the WRITE payload at struct se_transport_task->t_mem_list,
> and to add a new BIDI READ payload memory list at struct se_transport_task->t_mem_bidi_list.
> 
> *) descriptor setup:
> 
> This patch adds support for XDWRITEREAD_10 within transport_generic_cmd_sequencer(),
> and sets the new struct se_cmd->transport_complete_callback() completion function
> to transport_xor_callback().  It then updates transport_new_cmd_obj() to handle the
> BIDI READ case.
> 
> It also adds checking of BIDI READ data payloads in transport_generic_map_mem_to_cmd(),
> which signals the allocation of struct se_transport_task->t_mem_bidi_list in the same function.
> 
> *) memory mapping:
> 
> This patch updates transport_generic_get_cdb_count() to accept an enum dma_data_direction
> parameter to handle the BIDI and the existing WRITE/READ cases for SCF_SCSI_DATA_SG_IO_CDB.
> This patch also updates transport_generic_map_mem_to_cmd() to accept a 'void *mem_bidi_in'
> and 'u32 se_mem_bidi_num' from BIDI-capable TCM fabric modules.
> 
> It then updates transport_generic_get_task() and struct se_cmd->transport_get_task()
> to accept an enum dma_data_direction function parameter.
> 
> *) descriptor callback:
> 
> For the struct se_cmd callback, it updates transport_generic_complete_ok() to support
> BIDI-COMMANDS by invoking the new generic struct se_cmd->transport_complete_callback()
> (set up in transport_generic_cmd_sequencer()) to perform the post READ/WRITE XOR emulation.
> This also includes the addition of transport_memcpy_se_mem_read_contig(), used to copy the
> WRITE scatterlists into a local contiguous buffer for the XOR operations within
> transport_xor_callback().
> 
> *) descriptor release:
> 
> Update transport_free_pages() to walk the new T_TASK(cmd)->t_mem_bidi_list (when available)
> and release its struct se_mem entries and pages.
> 
> So far this has been tested with TCM_Loop using BSG w/ userspace code generating
> BIDI XDWRITE_READ_10 CDBs.
> 
> Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
> ---
>  drivers/target/target_core_pr.c        |    3 +-
>  drivers/target/target_core_transport.c |  278 +++++++++++++++++++++++++++-----
>  include/target/target_core_base.h      |   10 +-
>  include/target/target_core_transport.h |   10 +-
>  4 files changed, 254 insertions(+), 47 deletions(-)
> 
> diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
> index 4c408cb..141ce48 100644
> --- a/drivers/target/target_core_pr.c
> +++ b/drivers/target/target_core_pr.c
> @@ -497,8 +497,7 @@ static int core_scsi3_pr_seq_non_holder(
>  	 * WRITE_EXCLUSIVE_* reservation.
>  	 */
>  	if ((we) && !(registered_nexus)) {
> -		if ((cmd->data_direction == DMA_TO_DEVICE) ||
> -		    (cmd->data_direction == DMA_BIDIRECTIONAL)) {
> +		if (cmd->data_direction == DMA_TO_DEVICE) {
>  			/*
>  			 * Conflict for write exclusive
>  			 */
> diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
> index 517a59c..e709771 100644
> --- a/drivers/target/target_core_transport.c
> +++ b/drivers/target/target_core_transport.c
> @@ -2529,7 +2529,8 @@ static inline int transport_check_device_cdb_sector_count(
>  static struct se_task *transport_generic_get_task(
>  	struct se_transform_info *ti,
>  	struct se_cmd *cmd,
> -	void *se_obj_ptr)
> +	void *se_obj_ptr,
> +	enum dma_data_direction data_direction)
>  {
>  	struct se_task *task;
>  	struct se_device *dev = SE_DEV(cmd);
> @@ -2625,7 +2626,8 @@ static int transport_process_control_sg_transform(
>  		return -1;
>  	}
>  
> -	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
> +	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
> +				cmd->data_direction);
>  	if (!(task))
>  		return -1;
>  
> @@ -2665,7 +2667,8 @@ static int transport_process_control_nonsg_transform(
>  	unsigned char *cdb;
>  	struct se_task *task;
>  
> -	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
> +	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
> +				cmd->data_direction);
>  	if (!(task))
>  		return -1;
>  
> @@ -2699,7 +2702,8 @@ static int transport_process_non_data_transform(
>  	unsigned char *cdb;
>  	struct se_task *task;
>  
> -	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr);
> +	task = cmd->transport_get_task(ti, cmd, ti->se_obj_ptr,
> +				cmd->data_direction);
>  	if (!(task))
>  		return -1;
>  
> @@ -2741,11 +2745,6 @@ struct se_cmd *__transport_alloc_se_cmd(
>  	unsigned char *sense_buffer;
>  	int gfp_type = (in_interrupt()) ? GFP_ATOMIC : GFP_KERNEL;
>  
> -	if (data_direction == DMA_BIDIRECTIONAL) {
> -		printk(KERN_ERR "SCSI BiDirectional mode not supported yet\n");
> -		return ERR_PTR(-ENOSYS);
> -	}
> -
>  	cmd = kmem_cache_zalloc(se_cmd_cache, gfp_type);
>  	if (!(cmd)) {
>  		printk(KERN_ERR "kmem_cache_alloc() failed for se_cmd_cache\n");
> @@ -5183,6 +5182,54 @@ int transport_generic_emulate_request_sense(
>  }
>  EXPORT_SYMBOL(transport_generic_emulate_request_sense);
>  
> +static void transport_xor_callback(struct se_cmd *cmd)
> +{
> +	unsigned char *buf, *addr;
> +	struct se_mem *se_mem;
> +	unsigned int offset;
> +	int i;
> +	/*
> +	 * From sbc3r22.pdf section 5.48 XDWRITEREAD (10) command
> +	 *
> +	 * 1) read the specified logical block(s);
> +	 * 2) transfer logical blocks from the data-out buffer;
> +	 * 3) XOR the logical blocks transferred from the data-out buffer with
> +	 *    the logical blocks read, storing the resulting XOR data in a buffer;
> +	 * 4) if the DISABLE WRITE bit is set to zero, then write the logical
> +	 *    blocks transferred from the data-out buffer; and
> +	 * 5) transfer the resulting XOR data to the data-in buffer.
> +	 */
> +	buf = kmalloc(cmd->data_length, GFP_KERNEL);
> +	if (!(buf)) {
> +		printk(KERN_ERR "Unable to allocate xor_callback buf\n");
> +		return;
> +	}
> +	/*
> +	 * Copy the scatterlist WRITE buffer located at T_TASK(cmd)->t_mem_list
> +	 * into the locally allocated *buf
> +	 */
> +	transport_memcpy_se_mem_read_contig(cmd, buf, T_TASK(cmd)->t_mem_list);
> +	/*
> +	 * Now perform the XOR against the BIDI read memory located at
> +	 * T_TASK(cmd)->t_mem_bidi_list
> +	 */
> +
> +	offset = 0;
> +	list_for_each_entry(se_mem, T_TASK(cmd)->t_mem_bidi_list, se_list) {
> +		addr = (unsigned char *)kmap_atomic(se_mem->se_page, KM_USER0);
> +		if (!(addr))
> +			goto out;
> +
> +		for (i = 0; i < se_mem->se_len; i++)
> +			*(addr + se_mem->se_off + i) ^= *(buf + offset + i);
> +
> +		offset += se_mem->se_len;
> +		kunmap_atomic(addr, KM_USER0);
> +	}
> +out:
> +	kfree(buf);
> +}
> +
>  /*
>   * Used to obtain Sense Data from underlying Linux/SCSI struct scsi_cmnd
>   */
> @@ -5472,6 +5519,26 @@ static int transport_generic_cmd_sequencer(
>  		T_TASK(cmd)->t_tasks_fua = (cdb[1] & 0x8);
>  		ret = TGCS_DATA_SG_IO_CDB;
>  		break;
> +	case XDWRITEREAD_10:
> +		SET_GENERIC_TRANSPORT_FUNCTIONS(cmd);
> +		if ((cmd->data_direction != DMA_TO_DEVICE) ||
> +		    !(T_TASK(cmd)->t_tasks_bidi))
> +			return TGCS_INVALID_CDB_FIELD;
> +		sectors = transport_get_sectors_10(cdb, cmd, &sector_ret);
> +		if (sector_ret)
> +			return TGCS_UNSUPPORTED_CDB;
> +		size = transport_get_size(sectors, cdb, cmd);
> +		transport_dev_get_mem_SG(cmd->se_orig_obj_ptr, cmd);
> +		transport_get_maps(cmd);
> +		cmd->transport_split_cdb = &split_cdb_XX_10;
> +		cmd->transport_get_lba = &transport_lba_32;
> +		/*
> +		 * Setup BIDI XOR callback to be run during transport_generic_complete_ok()
> +		 */
> +		cmd->transport_complete_callback = &transport_xor_callback;
> +		T_TASK(cmd)->t_tasks_fua = (cdb[1] & 0x8);
> +		ret = TGCS_DATA_SG_IO_CDB;
> +		break;
>  	case 0xa3:
>  		SET_GENERIC_TRANSPORT_FUNCTIONS(cmd);
>  		if (TRANSPORT(dev)->get_device_type(dev) != TYPE_ROM) {
> @@ -6096,6 +6163,33 @@ void transport_memcpy_read_contig(
>  }
>  EXPORT_SYMBOL(transport_memcpy_read_contig);
>  
> +void transport_memcpy_se_mem_read_contig(
> +	struct se_cmd *cmd,
> +	unsigned char *dst,
> +	struct list_head *se_mem_list)
> +{
> +	struct se_mem *se_mem;
> +	void *src;
> +	u32 length = 0, total_length = cmd->data_length;
> +
> +	list_for_each_entry(se_mem, se_mem_list, se_list) {
> +		length = se_mem->se_len;
> +
> +		if (length > total_length)
> +			length = total_length;
> +
> +		src = page_address(se_mem->se_page) + se_mem->se_off;
> +
> +		memcpy(dst, src, length);
> +
> +		if (!(total_length -= length))
> +			return;
> +
> +		dst += length;
> +	}
> +}
> +
> +
>  /*     transport_generic_passthrough():
>   *
>   *
> @@ -6249,6 +6343,12 @@ void transport_generic_complete_ok(struct se_cmd *cmd)
>  			return;
>  		}
>  	}
> +	/*
> +	 * Check for a callback, used amongst other things by
> +	 * XDWRITE_READ_10 emulation.
> +	 */
> +	if (cmd->transport_complete_callback)
> +		cmd->transport_complete_callback(cmd);
>  
>  	switch (cmd->data_direction) {
>  	case DMA_FROM_DEVICE:
> @@ -6267,6 +6367,19 @@ void transport_generic_complete_ok(struct se_cmd *cmd)
>  				cmd->data_length;
>  		}
>  		spin_unlock(&cmd->se_lun->lun_sep_lock);
> +		/*
> +		 * Check if we need to send READ payload for BIDI-COMMAND
> +		 */
> +		if (T_TASK(cmd)->t_mem_bidi_list != NULL) {
> +			spin_lock(&cmd->se_lun->lun_sep_lock);
> +			if (SE_LUN(cmd)->lun_sep) {
> +				SE_LUN(cmd)->lun_sep->sep_stats.tx_data_octets +=
> +					cmd->data_length;
> +			}
> +			spin_unlock(&cmd->se_lun->lun_sep_lock);
> +			CMD_TFO(cmd)->queue_data_in(cmd);
> +			break;

Again this looks fishy to me. How do you know that this command needs a queue_data_in()
now? Are you not assuming XOR operations? Can you not move this to transport_xor_callback()
above?

> +		}
>  		/* Fall through for DMA_TO_DEVICE */
>  	case DMA_NONE:
>  		CMD_TFO(cmd)->queue_status(cmd);
> @@ -6347,6 +6460,23 @@ static inline void transport_free_pages(struct se_cmd *cmd)
>  		kmem_cache_free(se_mem_cache, se_mem);
>  	}
>  

Again the request for an updated tree. (I might be missing some code here.)

The above does a break; in the BIDI case, but are you freeing this here regardless?

> +	if (T_TASK(cmd)->t_mem_bidi_list && T_TASK(cmd)->t_tasks_se_bidi_num) {
> +		list_for_each_entry_safe(se_mem, se_mem_tmp,
> +				T_TASK(cmd)->t_mem_bidi_list, se_list) {
> +			/*
> +			 * We only call __free_page(struct se_mem->se_page) when
> +			 * SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is NOT in use.
> +			 */
> +			if (free_page)
> +				__free_page(se_mem->se_page);
> +
> +			list_del(&se_mem->se_list);
> +			kmem_cache_free(se_mem_cache, se_mem);
> +		}
> +	}
> +
> +	kfree(T_TASK(cmd)->t_mem_bidi_list);
> +	T_TASK(cmd)->t_mem_bidi_list = NULL;
>  	kfree(T_TASK(cmd)->t_mem_list);
>  	T_TASK(cmd)->t_mem_list = NULL;
>  	T_TASK(cmd)->t_tasks_se_num = 0;
> @@ -6477,7 +6607,9 @@ release_cmd:
>  int transport_generic_map_mem_to_cmd(
>  	struct se_cmd *cmd,
>  	void *mem,
> -	u32 se_mem_num)
> +	u32 se_mem_num,
> +	void *mem_bidi_in,
> +	u32 se_mem_bidi_num)
>  {
>  	u32 se_mem_cnt_out = 0;
>  	int ret;
> @@ -6489,6 +6621,12 @@ int transport_generic_map_mem_to_cmd(
>  	 * struct se_mem elements...
>  	 */
>  	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM)) {
> +		if ((mem_bidi_in) || (se_mem_bidi_num)) {
> +			printk(KERN_ERR "SCF_CMD_PASSTHROUGH_NOALLOC not supported"
> +				" with BIDI-COMMAND\n");
> +			return -ENOSYS;
> +		}
> +
>  		T_TASK(cmd)->t_mem_list = (struct list_head *)mem;
>  		T_TASK(cmd)->t_tasks_se_num = se_mem_num;
>  		cmd->se_cmd_flags |= SCF_CMD_PASSTHROUGH_NOALLOC;
> @@ -6507,14 +6645,35 @@ int transport_generic_map_mem_to_cmd(
>  		 */ 
>  		T_TASK(cmd)->t_mem_list = transport_init_se_mem_list();
>  		if (!(T_TASK(cmd)->t_mem_list))
> -			return -1;
> +			return -ENOMEM;
>  
>  		ret = transport_map_sg_to_mem(cmd,
>  			T_TASK(cmd)->t_mem_list, mem, &se_mem_cnt_out);
>  		if (ret < 0)
> -			return -1;
> +			return -ENOMEM;
>  
>  		T_TASK(cmd)->t_tasks_se_num = se_mem_cnt_out;
> +		/*
> +		 * Setup BIDI READ list of struct se_mem elements
> +		 */
> +		if ((mem_bidi_in) && (se_mem_bidi_num)) {
> +			T_TASK(cmd)->t_mem_bidi_list = transport_init_se_mem_list();
> +			if (!(T_TASK(cmd)->t_mem_bidi_list)) {
> +				kfree(T_TASK(cmd)->t_mem_list);
> +				return -ENOMEM;
> +			}
> +			se_mem_cnt_out = 0;
> +
> +			ret = transport_map_sg_to_mem(cmd,
> +				T_TASK(cmd)->t_mem_bidi_list, mem_bidi_in,
> +				&se_mem_cnt_out);
> +			if (ret < 0) {
> +				kfree(T_TASK(cmd)->t_mem_list);
> +				return -ENOMEM;
> +			}
> +
> +			T_TASK(cmd)->t_tasks_se_bidi_num = se_mem_cnt_out;
> +		}
>  		cmd->se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC;
>  
>  	} else if (cmd->se_cmd_flags & SCF_SCSI_CONTROL_NONSG_IO_CDB) {
> @@ -6610,6 +6769,11 @@ non_scsi_data:
>   */
>  int transport_generic_do_transform(struct se_cmd *cmd, struct se_transform_info *ti)
>  {
> +	if (!(cmd->transport_cdb_transform)) {
> +		dump_stack();
> +		return -1;
> +	}
> +
>  	if (cmd->transport_cdb_transform(cmd, ti) < 0)
>  		return -1;
>  
> @@ -6656,9 +6820,8 @@ int transport_new_cmd_obj(
>  	struct se_transform_info *ti,
>  	int post_execute)
>  {
> -	u32 task_cdbs = 0;
> -	struct se_mem *se_mem_out = NULL;
>  	struct se_device *dev = SE_DEV(cmd);
> +	u32 task_cdbs = 0, rc;
>  
>  	if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_SG_IO_CDB)) {
>  		task_cdbs++;
> @@ -6666,11 +6829,32 @@ int transport_new_cmd_obj(
>  	} else {
>  		ti->ti_set_counts = 1;
>  		ti->ti_dev = dev;
> -
> +		/*
> +		 * Setup any BIDI READ tasks and memory from
> +		 * T_TASK(cmd)->t_mem_bidi_list so the READ struct se_tasks
> +		 * are queued first..
> +		 */
> +		if (T_TASK(cmd)->t_mem_bidi_list != NULL) {
> +			rc = transport_generic_get_cdb_count(cmd, ti,
> +				T_TASK(cmd)->t_task_lba,
> +				T_TASK(cmd)->t_tasks_sectors,
> +				DMA_FROM_DEVICE, T_TASK(cmd)->t_mem_bidi_list);
> +			if (!(rc)) {
> +				cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
> +				cmd->scsi_sense_reason =
> +					TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> +				return PYX_TRANSPORT_LU_COMM_FAILURE;
> +			}
> +			ti->ti_set_counts = 0;
> +		}
> +		/*
> +		 * Setup the tasks and memory from T_TASK(cmd)->t_mem_list
> +		 * Note for BIDI transfers this will contain the WRITE payload
> +		 */
>  		task_cdbs = transport_generic_get_cdb_count(cmd, ti,
>  				T_TASK(cmd)->t_task_lba,
>  				T_TASK(cmd)->t_tasks_sectors,
> -				NULL, &se_mem_out);
> +				cmd->data_direction, T_TASK(cmd)->t_mem_list);
>  		if (!(task_cdbs)) {
>  			cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
>  			cmd->scsi_sense_reason =
> @@ -6743,7 +6927,17 @@ int transport_generic_get_mem(struct se_cmd *cmd, u32 length, u32 dma_size)
>  
>  	T_TASK(cmd)->t_mem_list = transport_init_se_mem_list();
>  	if (!(T_TASK(cmd)->t_mem_list))
> -		return -1;
> +		return -ENOMEM;
> +	/*
> +	 * Setup BIDI-COMMAND READ list of struct se_mem elements
> +	 */
> +	if (T_TASK(cmd)->t_tasks_bidi) {
> +		T_TASK(cmd)->t_mem_bidi_list = transport_init_se_mem_list();
> +		if (!(T_TASK(cmd)->t_mem_bidi_list)) {
> +			kfree(T_TASK(cmd)->t_mem_list);
> +			return -ENOMEM;
> +		}
> +	}
>  
>  	while (length) {
>  		se_mem = kmem_cache_zalloc(se_mem_cache, GFP_KERNEL);
> @@ -7240,28 +7434,28 @@ u32 transport_generic_get_cdb_count(
>  	struct se_transform_info *ti,
>  	unsigned long long starting_lba,
>  	u32 sectors,
> -	struct se_mem *se_mem_in,
> -	struct se_mem **se_mem_out)
> +	enum dma_data_direction data_direction,
> +	struct list_head *mem_list)
>  {
>  	unsigned char *cdb = NULL;
>  	struct se_task *task;
> -	struct se_mem *se_mem, *se_mem_lout = NULL;
> +	struct se_mem *se_mem = NULL, *se_mem_lout = NULL;
>  	struct se_device *dev = SE_DEV(cmd);
>  	int max_sectors_set = 0, ret;
>  	u32 task_offset_in = 0, se_mem_cnt = 0, task_cdbs = 0;
>  	unsigned long long lba;
>  
> -	if (!se_mem_in) {
> -		list_for_each_entry(se_mem_in, T_TASK(cmd)->t_mem_list, se_list)
> -			break;
> -
> -		if (!se_mem_in) {
> -			printk(KERN_ERR "se_mem_in is NULL\n");
> -			return 0;
> -		}
> +	if (!mem_list) {
> +		printk(KERN_ERR "mem_list is NULL in transport_generic_get"
> +				"_cdb_count()\n");
> +		return 0;
>  	}
> -	se_mem = se_mem_in;
> -
> +	/*
> +	 * Using RAMDISK_DR backstores is the only case where
> +	 * mem_list will ever be empty at this point.
> +	 */
> +	if (!(list_empty(mem_list)))
> +		se_mem = list_entry(mem_list->next, struct se_mem, se_list);
>  	/*
>  	 * Locate the start volume segment in which the received LBA will be
>  	 * executed upon.
> @@ -7280,7 +7474,12 @@ u32 transport_generic_get_cdb_count(
>  			CMD_TFO(cmd)->get_task_tag(cmd), lba, sectors,
>  			transport_dev_end_lba(dev));
>  
> -		task = cmd->transport_get_task(ti, cmd, dev);
> +		if (!(cmd->transport_get_task)) {
> +			dump_stack();
> +			goto out;
> +		}
> +
> +		task = cmd->transport_get_task(ti, cmd, dev, data_direction);
>  		if (!(task))
>  			goto out;
>  
> @@ -7293,7 +7492,7 @@ u32 transport_generic_get_cdb_count(
>  		task->task_size = (task->task_sectors *
>  				   DEV_ATTRIB(dev)->block_size);
>  		task->transport_map_task = transport_dev_get_map_SG(dev,
> -					cmd->data_direction);
> +					data_direction);
>  
>  		cdb = TRANSPORT(dev)->get_cdb(task);
>  		if ((cdb)) {
> @@ -7306,14 +7505,13 @@ u32 transport_generic_get_cdb_count(
>  		 * Perform the SE OBJ plugin and/or Transport plugin specific
>  		 * mapping for T_TASK(cmd)->t_mem_list.
>  		 */
> -		ret = transport_do_se_mem_map(dev, task,
> -				T_TASK(cmd)->t_mem_list, NULL, se_mem,
> -				&se_mem_lout, &se_mem_cnt, &task_offset_in);
> +		ret = transport_do_se_mem_map(dev, task, mem_list,
> +				NULL, se_mem, &se_mem_lout, &se_mem_cnt,
> +				&task_offset_in);
>  		if (ret < 0)
>  			goto out;
>  
>  		se_mem = se_mem_lout;
> -		*se_mem_out = se_mem_lout;
>  		task_cdbs++;
>  
>  		DEBUG_VOL("Incremented task_cdbs(%u) task->task_sg_num(%u)\n",
> @@ -7333,8 +7531,9 @@ u32 transport_generic_get_cdb_count(
>  		atomic_inc(&T_TASK(cmd)->t_se_count);
>  	}
>  
> -	DEBUG_VOL("ITT[0x%08x] total cdbs(%u)\n",
> -		CMD_TFO(cmd)->get_task_tag(cmd), task_cdbs);
> +	DEBUG_VOL("ITT[0x%08x] total %s cdbs(%u)\n",
> +		CMD_TFO(cmd)->get_task_tag(cmd), (data_direction == DMA_TO_DEVICE)
> +		? "DMA_TO_DEVICE" : "DMA_FROM_DEVICE", task_cdbs);
>  
>  	return task_cdbs;
>  out:
> @@ -8129,8 +8328,7 @@ void transport_send_task_abort(struct se_cmd *cmd)
>  	 * response.  This response with TASK_ABORTED status will be
>  	 * queued back to fabric module by transport_check_aborted_status().
>  	 */
> -	if ((cmd->data_direction == DMA_TO_DEVICE) ||
> -	    (cmd->data_direction == DMA_BIDIRECTIONAL)) {
> +	if (cmd->data_direction == DMA_TO_DEVICE) {
>  		if (CMD_TFO(cmd)->write_pending_status(cmd) != 0) {
>  			atomic_inc(&T_TASK(cmd)->t_transport_aborted);
>  			smp_mb__after_atomic_inc();
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index e9b98e9..97e9715 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -414,11 +414,13 @@ struct se_transport_task {
>  	unsigned long long	t_task_lba;
>  	int			t_tasks_failed;
>  	int			t_tasks_fua;
> +	int			t_tasks_bidi:1;
>  	u32			t_task_cdbs;
>  	u32			t_tasks_check;
>  	u32			t_tasks_no;
>  	u32			t_tasks_sectors;
>  	u32			t_tasks_se_num;
> +	u32			t_tasks_se_bidi_num;
>  	u32			t_tasks_sg_chained_no;
>  	atomic_t		t_fe_count;
>  	atomic_t		t_se_count;
> @@ -447,8 +449,10 @@ struct se_transport_task {
>  	struct scatterlist	t_tasks_sg_bounce;
>  	void			*t_task_buf;
>  	void			*t_task_pt_buf;
> -	struct list_head	t_task_list;
>  	struct list_head	*t_mem_list;
> +	/* Used for BIDI READ */
> +	struct list_head	*t_mem_bidi_list;
> +	struct list_head	t_task_list;
>  } ____cacheline_aligned;
>  
>  struct se_task {
> @@ -598,7 +602,8 @@ struct se_cmd {
>  	u32 (*transport_get_lba)(unsigned char *);
>  	unsigned long long (*transport_get_long_lba)(unsigned char *);
>  	struct se_task *(*transport_get_task)(struct se_transform_info *,
> -					struct se_cmd *, void *);
> +					struct se_cmd *, void *,
> +					enum dma_data_direction);
>  	int (*transport_map_buffers_to_tasks)(struct se_cmd *);
>  	void (*transport_map_SG_segments)(struct se_unmap_sg *);
>  	void (*transport_passthrough_done)(struct se_cmd *);
> @@ -607,6 +612,7 @@ struct se_cmd {
>  					struct se_unmap_sg *);
>  	void (*transport_split_cdb)(unsigned long long, u32 *, unsigned char *);
>  	void (*transport_wait_for_tasks)(struct se_cmd *, int, int);
> +	void (*transport_complete_callback)(struct se_cmd *);
>  	void (*callback)(struct se_cmd *cmd, void *callback_arg,
>  			int complete_status);
>  	void *callback_arg;
> diff --git a/include/target/target_core_transport.h b/include/target/target_core_transport.h
> index ef4c084..20702d7 100644
> --- a/include/target/target_core_transport.h
> +++ b/include/target/target_core_transport.h
> @@ -232,6 +232,8 @@ extern void transport_memcpy_write_contig(struct se_cmd *, struct scatterlist *,
>  				unsigned char *);
>  extern void transport_memcpy_read_contig(struct se_cmd *, unsigned char *,
>  				struct scatterlist *);
> +extern void transport_memcpy_se_mem_read_contig(struct se_cmd *,
> +				unsigned char *, struct list_head *);
>  extern int transport_generic_passthrough_async(struct se_cmd *cmd,
>  				void(*callback)(struct se_cmd *cmd,
>  				void *callback_arg, int complete_status),
> @@ -242,7 +244,8 @@ extern void transport_generic_complete_ok(struct se_cmd *);
>  extern void transport_free_dev_tasks(struct se_cmd *);
>  extern void transport_release_fe_cmd(struct se_cmd *);
>  extern int transport_generic_remove(struct se_cmd *, int, int);
> -extern int transport_generic_map_mem_to_cmd(struct se_cmd *cmd, void *, u32);
> +extern int transport_generic_map_mem_to_cmd(struct se_cmd *cmd, void *, u32,
> +				void *, u32);
>  extern int transport_lun_wait_for_tasks(struct se_cmd *, struct se_lun *);
>  extern int transport_clear_lun_from_sessions(struct se_lun *);
>  extern int transport_check_aborted_status(struct se_cmd *, int);
> @@ -271,8 +274,9 @@ extern int transport_map_mem_to_sg(struct se_task *, struct list_head *,
>  extern void transport_do_task_sg_chain(struct se_cmd *);
>  extern u32 transport_generic_get_cdb_count(struct se_cmd *,
>  					struct se_transform_info *,
> -					unsigned long long, u32, struct se_mem *,
> -					struct se_mem **);
> +					unsigned long long, u32,
> +					enum dma_data_direction,
> +					struct list_head *);
>  extern int transport_generic_new_cmd(struct se_cmd *);
>  extern void transport_generic_process_write(struct se_cmd *);
>  extern int transport_generic_do_tmr(struct se_cmd *);

Thanks in advance
Boaz


* Re: [PATCH v2 1/2] tcm: Add support for BIDI-COMMANDS and XDWRITE_READ_10 emulation
  2010-09-19 13:29 ` Boaz Harrosh
@ 2010-09-19 13:35   ` Boaz Harrosh
  2010-09-20  9:02   ` Nicholas A. Bellinger
  1 sibling, 0 replies; 4+ messages in thread
From: Boaz Harrosh @ 2010-09-19 13:35 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: linux-scsi, linux-kernel, FUJITA Tomonori, Mike Christie,
	Hannes Reinecke, James Bottomley, Konrad Rzeszutek Wilk,
	Douglas Gilbert, Joe Eykholt

On 09/19/2010 03:29 PM, Boaz Harrosh wrote:
> On 09/16/2010 12:41 AM, Nicholas A. Bellinger wrote:
>>  		/* Fall through for DMA_TO_DEVICE */
>>  	case DMA_NONE:
>>  		CMD_TFO(cmd)->queue_status(cmd);
>> @@ -6347,6 +6460,23 @@ static inline void transport_free_pages(struct se_cmd *cmd)
>>  		kmem_cache_free(se_mem_cache, se_mem);
>>  	}
>>  
> 
> Again the request for an update tree. (Might be missing some code here)
> 
> The above does a break; in bidi case, but are you freeing this here regardless?
> 

Scratch that, stupid me. That's in another function. Rrrr, I need my coffee now.

Boaz


* Re: [PATCH v2 1/2] tcm: Add support for BIDI-COMMANDS and XDWRITE_READ_10 emulation
  2010-09-19 13:29 ` Boaz Harrosh
  2010-09-19 13:35   ` Boaz Harrosh
@ 2010-09-20  9:02   ` Nicholas A. Bellinger
  1 sibling, 0 replies; 4+ messages in thread
From: Nicholas A. Bellinger @ 2010-09-20  9:02 UTC (permalink / raw)
  To: Boaz Harrosh
  Cc: linux-scsi, linux-kernel, FUJITA Tomonori, Mike Christie,
	Hannes Reinecke, James Bottomley, Konrad Rzeszutek Wilk,
	Douglas Gilbert, Joe Eykholt

On Sun, 2010-09-19 at 15:29 +0200, Boaz Harrosh wrote:
> On 09/16/2010 12:41 AM, Nicholas A. Bellinger wrote:
> > From: Nicholas Bellinger <nab@linux-iscsi.org>
> > 
> 
> Hi dear Nicholas
> 
> I still have a few reservations regarding the use of the:
> +	int			t_tasks_bidi:1;
> 
> in struct se_transport_task; at minimum I'd use t_tasks_se_bidi_num
> as a look-ahead instead, but I hate that as well.  I suspect none of
> this is needed.
> 

Yes, unfortunately AFAICT this is required by the fabric module during
the initial I/O setup phase in order to signal TCM Core before
T_TASK(cmd)->t_mem_bidi_list is set up, without depending upon
DMA_BIDIRECTIONAL or something else like it.
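
i.e. something along these lines in a BIDI-aware fabric module's command setup
path (sketch only, 'bidi_request' and 'se_cmd' are placeholder names):

	/*
	 * Flag the BIDI case before any memory lists exist, so that
	 * transport_generic_cmd_sequencer() accepts XDWRITEREAD_10 and
	 * transport_generic_get_mem() allocates t_mem_bidi_list.
	 */
	if (bidi_request)
		T_TASK(se_cmd)->t_tasks_bidi = 1;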

> At http://git.kernel.org/?p=linux/kernel/git/nab/lio-4.0.git I still get
> an old head without these or the CDB32 stuff.
> 
> Where can I find a git web with the latest bits?  I'd like to have a
> closer look.
> 

My 'upstream' v4.0 branch that all of the changes have been going into
is here..

http://git.kernel.org/?p=linux/kernel/git/nab/lio-core-2.6.git;a=shortlog;h=refs/heads/lio-4.0

> (You know that I have a vested interest in all this I need it to be solid)
> 

Many thanks Boaz!

--nab



