public inbox for linux-scsi@vger.kernel.org
* [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required
       [not found] <1429972410-7146-1-git-send-email-akinobu.mita@gmail.com>
@ 2015-04-25 14:33 ` Akinobu Mita
  2015-04-26  9:26   ` Sagi Grimberg
  2015-04-25 14:33 ` [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory Akinobu Mita
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Akinobu Mita @ 2015-04-25 14:33 UTC (permalink / raw)
  To: target-devel
  Cc: Akinobu Mita, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi

Even if the device backend is initialized with protection info
enabled, some requests arrive without protection info attached: for
example, WRITE SAME commands issued by block device helpers, or WRITE
commands submitted with WRPROTECT=0 via the SG_IO ioctl.

So when the TCM loopback fabric module is used, se_cmd->t_prot_sg is
NULL for these requests, and performing WRITE_INSERT of PI with the
software emulation in sbc_dif_generate() causes a kernel crash.

To fix this, introduce SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC in
se_cmd_flags, which indicates whether se_cmd->t_prot_sg needs to be
allocated by the target core or the protection information
pre-allocated by the SCSI mid-layer should be used instead.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
* No change from v2

 drivers/target/target_core_transport.c | 30 ++++++++++++++++++------------
 include/target/target_core_base.h      |  1 +
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 7a9e7e2..fe52883 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1450,6 +1450,7 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
 	if (sgl_prot_count) {
 		se_cmd->t_prot_sg = sgl_prot;
 		se_cmd->t_prot_nents = sgl_prot_count;
+		se_cmd->se_cmd_flags |= SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC;
 	}
 
 	/*
@@ -2181,6 +2182,12 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
 
 static inline void transport_free_pages(struct se_cmd *cmd)
 {
+	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
+		transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
+		cmd->t_prot_sg = NULL;
+		cmd->t_prot_nents = 0;
+	}
+
 	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
 		/*
 		 * Release special case READ buffer payload required for
@@ -2204,10 +2211,6 @@ static inline void transport_free_pages(struct se_cmd *cmd)
 	transport_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
 	cmd->t_bidi_data_sg = NULL;
 	cmd->t_bidi_data_nents = 0;
-
-	transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
-	cmd->t_prot_sg = NULL;
-	cmd->t_prot_nents = 0;
 }
 
 /**
@@ -2346,6 +2349,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 	int ret = 0;
 	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
 
+	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
+		if (cmd->prot_op != TARGET_PROT_NORMAL) {
+			ret = target_alloc_sgl(&cmd->t_prot_sg,
+					       &cmd->t_prot_nents,
+					       cmd->prot_length, true);
+			if (ret < 0)
+				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+		}
+
+	}
+
 	/*
 	 * Determine is the TCM fabric module has already allocated physical
 	 * memory, and is directly calling transport_generic_map_mem_to_cmd()
@@ -2371,14 +2385,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
 				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 		}
 
-		if (cmd->prot_op != TARGET_PROT_NORMAL) {
-			ret = target_alloc_sgl(&cmd->t_prot_sg,
-					       &cmd->t_prot_nents,
-					       cmd->prot_length, true);
-			if (ret < 0)
-				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-		}
-
 		ret = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents,
 				       cmd->data_length, zero_flag);
 		if (ret < 0)
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 480e9f8..13efcdd 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -167,6 +167,7 @@ enum se_cmd_flags_table {
 	SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC = 0x00020000,
 	SCF_COMPARE_AND_WRITE		= 0x00080000,
 	SCF_COMPARE_AND_WRITE_POST	= 0x00100000,
+	SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC = 0x00200000,
 };
 
 /* struct se_dev_entry->lun_flags and struct se_lun->lun_access */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory
       [not found] <1429972410-7146-1-git-send-email-akinobu.mita@gmail.com>
  2015-04-25 14:33 ` [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required Akinobu Mita
@ 2015-04-25 14:33 ` Akinobu Mita
  2015-04-26 10:07   ` Sagi Grimberg
  2015-04-25 14:33 ` [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME Akinobu Mita
  2015-04-25 14:33 ` [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled Akinobu Mita
  3 siblings, 1 reply; 14+ messages in thread
From: Akinobu Mita @ 2015-04-25 14:33 UTC (permalink / raw)
  To: target-devel
  Cc: Akinobu Mita, Tim Chen, Herbert Xu, David S. Miller, linux-crypto,
	Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

sbc_dif_generate() and sbc_dif_verify() currently assume that each
SG element of the data transfer memory doesn't straddle a block size
boundary.

However, when using the SG_IO ioctl, the data transfer memory can be
chosen in a way that doesn't satisfy that alignment requirement.

In order to handle such cases correctly, this change makes the outer
loop iterate over the protection information SG list, advances the
data transfer SG list from within the inner loop, and enables
calculating the CRC of a block that straddles multiple SG elements.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
* Changes from v2:
- Handle odd SG mapping correctly instead of giving up

 drivers/target/target_core_sbc.c | 108 +++++++++++++++++++++++++--------------
 1 file changed, 69 insertions(+), 39 deletions(-)

diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index edba39f..33d2426 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -1182,27 +1182,43 @@ sbc_dif_generate(struct se_cmd *cmd)
 {
 	struct se_device *dev = cmd->se_dev;
 	struct se_dif_v1_tuple *sdt;
-	struct scatterlist *dsg, *psg = cmd->t_prot_sg;
+	struct scatterlist *dsg = cmd->t_data_sg, *psg;
 	sector_t sector = cmd->t_task_lba;
 	void *daddr, *paddr;
 	int i, j, offset = 0;
+	unsigned int block_size = dev->dev_attrib.block_size;
 
-	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 
-		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
+		for (j = 0; j < psg->length;
+				j += sizeof(struct se_dif_v1_tuple)) {
+			__u16 crc = 0;
+			unsigned int avail;
 
-			if (offset >= psg->length) {
-				kunmap_atomic(paddr);
-				psg = sg_next(psg);
-				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-				offset = 0;
+			if (offset >= dsg->length) {
+				offset -= dsg->length;
+				kunmap_atomic(daddr);
+				dsg = sg_next(dsg);
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
-			sdt = paddr + offset;
-			sdt->guard_tag = cpu_to_be16(crc_t10dif(daddr + j,
-						dev->dev_attrib.block_size));
+			sdt = paddr + j;
+
+			avail = min(block_size, dsg->length - offset);
+			crc = crc_t10dif(daddr + offset, avail);
+			if (avail < block_size) {
+				kunmap_atomic(daddr);
+				dsg = sg_next(dsg);
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				offset = block_size - avail;
+				crc = crc_t10dif_update(crc, daddr, offset);
+			} else {
+				offset += block_size;
+			}
+
+			sdt->guard_tag = cpu_to_be16(crc);
 			if (cmd->prot_type == TARGET_DIF_TYPE1_PROT)
 				sdt->ref_tag = cpu_to_be32(sector & 0xffffffff);
 			sdt->app_tag = 0;
@@ -1215,26 +1231,23 @@ sbc_dif_generate(struct se_cmd *cmd)
 				 be32_to_cpu(sdt->ref_tag));
 
 			sector++;
-			offset += sizeof(struct se_dif_v1_tuple);
 		}
 
-		kunmap_atomic(paddr);
 		kunmap_atomic(daddr);
+		kunmap_atomic(paddr);
 	}
 }
 
 static sense_reason_t
 sbc_dif_v1_verify(struct se_cmd *cmd, struct se_dif_v1_tuple *sdt,
-		  const void *p, sector_t sector, unsigned int ei_lba)
+		  __u16 crc, sector_t sector, unsigned int ei_lba)
 {
-	struct se_device *dev = cmd->se_dev;
-	int block_size = dev->dev_attrib.block_size;
 	__be16 csum;
 
 	if (!(cmd->prot_checks & TARGET_DIF_CHECK_GUARD))
 		goto check_ref;
 
-	csum = cpu_to_be16(crc_t10dif(p, block_size));
+	csum = cpu_to_be16(crc);
 
 	if (sdt->guard_tag != csum) {
 		pr_err("DIFv1 checksum failed on sector %llu guard tag 0x%04x"
@@ -1316,26 +1329,32 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 {
 	struct se_device *dev = cmd->se_dev;
 	struct se_dif_v1_tuple *sdt;
-	struct scatterlist *dsg;
+	struct scatterlist *dsg = cmd->t_data_sg;
 	sector_t sector = start;
 	void *daddr, *paddr;
-	int i, j;
+	int i;
 	sense_reason_t rc;
+	int dsg_off = 0;
+	unsigned int block_size = dev->dev_attrib.block_size;
 
-	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
-		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
+		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 
-		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
-
-			if (psg_off >= psg->length) {
-				kunmap_atomic(paddr - psg->offset);
-				psg = sg_next(psg);
-				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
-				psg_off = 0;
+		for (i = psg_off; i < psg->length &&
+				sector < start + sectors;
+				i += sizeof(struct se_dif_v1_tuple)) {
+			__u16 crc;
+			unsigned int avail;
+
+			if (dsg_off >= dsg->length) {
+				dsg_off -= dsg->length;
+				kunmap_atomic(daddr);
+				dsg = sg_next(dsg);
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
-			sdt = paddr + psg_off;
+			sdt = paddr + i;
 
 			pr_debug("DIF READ sector: %llu guard_tag: 0x%04x"
 				 " app_tag: 0x%04x ref_tag: %u\n",
@@ -1343,27 +1362,38 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 				 sdt->app_tag, be32_to_cpu(sdt->ref_tag));
 
 			if (sdt->app_tag == cpu_to_be16(0xffff)) {
-				sector++;
-				psg_off += sizeof(struct se_dif_v1_tuple);
-				continue;
+				dsg_off += block_size;
+				goto next;
+			}
+
+			avail = min(block_size, dsg->length - dsg_off);
+
+			crc = crc_t10dif(daddr + dsg_off, avail);
+			if (avail < block_size) {
+				kunmap_atomic(daddr);
+				dsg = sg_next(dsg);
+				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
+				dsg_off = block_size - avail;
+				crc = crc_t10dif_update(crc, daddr, dsg_off);
+			} else {
+				dsg_off += block_size;
 			}
 
-			rc = sbc_dif_v1_verify(cmd, sdt, daddr + j, sector,
-					       ei_lba);
+			rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
 			if (rc) {
-				kunmap_atomic(paddr - psg->offset);
 				kunmap_atomic(daddr - dsg->offset);
+				kunmap_atomic(paddr - psg->offset);
 				cmd->bad_sector = sector;
 				return rc;
 			}
-
+next:
 			sector++;
 			ei_lba++;
-			psg_off += sizeof(struct se_dif_v1_tuple);
 		}
 
-		kunmap_atomic(paddr - psg->offset);
+		psg_off = 0;
 		kunmap_atomic(daddr - dsg->offset);
+		kunmap_atomic(paddr - psg->offset);
 	}
 
 	return 0;
-- 
1.9.1


* [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME
       [not found] <1429972410-7146-1-git-send-email-akinobu.mita@gmail.com>
  2015-04-25 14:33 ` [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required Akinobu Mita
  2015-04-25 14:33 ` [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory Akinobu Mita
@ 2015-04-25 14:33 ` Akinobu Mita
  2015-04-26  9:53   ` Sagi Grimberg
  2015-04-25 14:33 ` [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled Akinobu Mita
  3 siblings, 1 reply; 14+ messages in thread
From: Akinobu Mita @ 2015-04-25 14:33 UTC (permalink / raw)
  To: target-devel
  Cc: Akinobu Mita, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi

For WRITE SAME, the data transfer memory contains only a single
block, but protection information is required for all blocks written
by the command.

This change makes sbc_dif_generate() and sbc_dif_verify() work for
WRITE SAME by wrapping back to the start of the data SG list whenever
it is exhausted.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
* Changes from v2:
- Handle odd SG mapping correctly instead of giving up

 drivers/target/target_core_sbc.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 33d2426..10c7cb9 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -1177,6 +1177,24 @@ err:
 }
 EXPORT_SYMBOL(sbc_execute_unmap);
 
+static bool sbc_is_write_same(struct se_cmd *cmd)
+{
+	u16 service_action;
+
+	switch (cmd->t_task_cdb[0]) {
+	case WRITE_SAME:
+	case WRITE_SAME_16:
+		return true;
+	case VARIABLE_LENGTH_CMD:
+		service_action = get_unaligned_be16(&cmd->t_task_cdb[8]);
+		if (service_action == WRITE_SAME_32)
+			return true;
+		break;
+	}
+
+	return false;
+}
+
 void
 sbc_dif_generate(struct se_cmd *cmd)
 {
@@ -1187,6 +1205,7 @@ sbc_dif_generate(struct se_cmd *cmd)
 	void *daddr, *paddr;
 	int i, j, offset = 0;
 	unsigned int block_size = dev->dev_attrib.block_size;
+	bool is_write_same = sbc_is_write_same(cmd);
 
 	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
@@ -1201,6 +1220,8 @@ sbc_dif_generate(struct se_cmd *cmd)
 				offset -= dsg->length;
 				kunmap_atomic(daddr);
 				dsg = sg_next(dsg);
+				if (!dsg && is_write_same)
+					dsg = cmd->t_data_sg;
 				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
@@ -1211,6 +1232,8 @@ sbc_dif_generate(struct se_cmd *cmd)
 			if (avail < block_size) {
 				kunmap_atomic(daddr);
 				dsg = sg_next(dsg);
+				if (!dsg && is_write_same)
+					dsg = cmd->t_data_sg;
 				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 				offset = block_size - avail;
 				crc = crc_t10dif_update(crc, daddr, offset);
@@ -1336,6 +1359,7 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 	sense_reason_t rc;
 	int dsg_off = 0;
 	unsigned int block_size = dev->dev_attrib.block_size;
+	bool is_write_same = sbc_is_write_same(cmd);
 
 	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
 		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
@@ -1351,6 +1375,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 				dsg_off -= dsg->length;
 				kunmap_atomic(daddr);
 				dsg = sg_next(dsg);
+				if (!dsg && is_write_same)
+					dsg = cmd->t_data_sg;
 				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 			}
 
@@ -1372,6 +1398,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
 			if (avail < block_size) {
 				kunmap_atomic(daddr);
 				dsg = sg_next(dsg);
+				if (!dsg && is_write_same)
+					dsg = cmd->t_data_sg;
 				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
 				dsg_off = block_size - avail;
 				crc = crc_t10dif_update(crc, daddr, dsg_off);
-- 
1.9.1



* [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled
       [not found] <1429972410-7146-1-git-send-email-akinobu.mita@gmail.com>
                   ` (2 preceding siblings ...)
  2015-04-25 14:33 ` [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME Akinobu Mita
@ 2015-04-25 14:33 ` Akinobu Mita
  2015-04-26  9:58   ` Sagi Grimberg
  3 siblings, 1 reply; 14+ messages in thread
From: Akinobu Mita @ 2015-04-25 14:33 UTC (permalink / raw)
  To: target-devel
  Cc: Akinobu Mita, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi

Now that correct PI can be generated for the WRITE SAME command, it
is no longer necessary to disallow WRITE SAME when protection info is
enabled.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: target-devel@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
---
* No change from v2

 drivers/target/target_core_file.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index 829817a..fe98f58 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -376,16 +376,12 @@ fd_execute_write_same(struct se_cmd *cmd)
 	struct bio_vec *bvec;
 	unsigned int len = 0, i;
 	ssize_t ret;
+	sense_reason_t rc;
 
 	if (!nolb) {
 		target_complete_cmd(cmd, SAM_STAT_GOOD);
 		return 0;
 	}
-	if (cmd->prot_op) {
-		pr_err("WRITE_SAME: Protection information with FILEIO"
-		       " backends not supported\n");
-		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-	}
 
 	if (cmd->t_data_nents > 1 ||
 	    cmd->t_data_sg[0].length != cmd->se_dev->dev_attrib.block_size) {
@@ -397,6 +393,10 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_INVALID_CDB_FIELD;
 	}
 
+	rc = sbc_dif_verify(cmd, cmd->t_task_lba, nolb, 0, cmd->t_prot_sg, 0);
+	if (rc)
+		return rc;
+
 	bvec = kcalloc(nolb, sizeof(struct bio_vec), GFP_KERNEL);
 	if (!bvec)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -418,6 +418,14 @@ fd_execute_write_same(struct se_cmd *cmd)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 
+	if (cmd->prot_op) {
+		ret = fd_do_rw(cmd, fd_dev->fd_prot_file, se_dev->prot_length,
+				cmd->t_prot_sg, cmd->t_prot_nents,
+				cmd->prot_length, 1);
+		if (ret < 0)
+			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+	}
+
 	target_complete_cmd(cmd, SAM_STAT_GOOD);
 	return 0;
 }
-- 
1.9.1



* Re: [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required
  2015-04-25 14:33 ` [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required Akinobu Mita
@ 2015-04-26  9:26   ` Sagi Grimberg
  2015-04-26  9:44     ` Sagi Grimberg
  0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-26  9:26 UTC (permalink / raw)
  To: Akinobu Mita, target-devel
  Cc: Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

On 4/25/2015 5:33 PM, Akinobu Mita wrote:
> Even if the device backend is initialized with protection info is
> enabled, some requests don't have the protection info attached for
> WRITE SAME command issued by block device helpers, WRITE command with
> WRPROTECT=0 by SG_IO ioctl, etc.
>
> So when TCM loopback fabric module is used, se_cmd->t_prot_sg is NULL
> for these requests and performing WRITE_INSERT of PI using software
> emulation by sbc_dif_generate() causes kernel crash.
>
> To fix this, introduce SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC for
> se_cmd_flags, which is used to determine that se_cmd->t_prot_sg needs
> to be allocated or use pre-allocated protection information by scsi
> mid-layer.
>
> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
> Cc: Sagi Grimberg <sagig@mellanox.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
> Cc: target-devel@vger.kernel.org
> Cc: linux-scsi@vger.kernel.org
> ---
> * No change from v2
>
>   drivers/target/target_core_transport.c | 30 ++++++++++++++++++------------
>   include/target/target_core_base.h      |  1 +
>   2 files changed, 19 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
> index 7a9e7e2..fe52883 100644
> --- a/drivers/target/target_core_transport.c
> +++ b/drivers/target/target_core_transport.c
> @@ -1450,6 +1450,7 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
>   	if (sgl_prot_count) {
>   		se_cmd->t_prot_sg = sgl_prot;
>   		se_cmd->t_prot_nents = sgl_prot_count;
> +		se_cmd->se_cmd_flags |= SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC;
>   	}
>
>   	/*
> @@ -2181,6 +2182,12 @@ static inline void transport_reset_sgl_orig(struct se_cmd *cmd)
>
>   static inline void transport_free_pages(struct se_cmd *cmd)
>   {
> +	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
> +		transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
> +		cmd->t_prot_sg = NULL;
> +		cmd->t_prot_nents = 0;
> +	}
> +

Hi Akinobu,

Any reason why this changed its location to the start of the function?

>   	if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
>   		/*
>   		 * Release special case READ buffer payload required for
> @@ -2204,10 +2211,6 @@ static inline void transport_free_pages(struct se_cmd *cmd)
>   	transport_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
>   	cmd->t_bidi_data_sg = NULL;
>   	cmd->t_bidi_data_nents = 0;
> -
> -	transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
> -	cmd->t_prot_sg = NULL;
> -	cmd->t_prot_nents = 0;
>   }
>
>   /**
> @@ -2346,6 +2349,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>   	int ret = 0;
>   	bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
>
> +	if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
> +		if (cmd->prot_op != TARGET_PROT_NORMAL) {

This seems wrong,

What will happen for transports that actually need to allocate
protection SGLs? The allocation is unreachable since
SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is not set...

I'd say this needs to be:

if (cmd->prot_op != TARGET_PROT_NORMAL &&
     !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {

> +			ret = target_alloc_sgl(&cmd->t_prot_sg,
> +					       &cmd->t_prot_nents,
> +					       cmd->prot_length, true);
> +			if (ret < 0)
> +				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> +		}
> +
> +	}
> +
>   	/*
>   	 * Determine is the TCM fabric module has already allocated physical
>   	 * memory, and is directly calling transport_generic_map_mem_to_cmd()
> @@ -2371,14 +2385,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>   				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>   		}
>
> -		if (cmd->prot_op != TARGET_PROT_NORMAL) {
> -			ret = target_alloc_sgl(&cmd->t_prot_sg,
> -					       &cmd->t_prot_nents,
> -					       cmd->prot_length, true);
> -			if (ret < 0)
> -				return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> -		}
> -
>   		ret = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents,
>   				       cmd->data_length, zero_flag);
>   		if (ret < 0)
> diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
> index 480e9f8..13efcdd 100644
> --- a/include/target/target_core_base.h
> +++ b/include/target/target_core_base.h
> @@ -167,6 +167,7 @@ enum se_cmd_flags_table {
>   	SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC = 0x00020000,
>   	SCF_COMPARE_AND_WRITE		= 0x00080000,
>   	SCF_COMPARE_AND_WRITE_POST	= 0x00100000,
> +	SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC = 0x00200000,
>   };
>
>   /* struct se_dev_entry->lun_flags and struct se_lun->lun_access */
>


* Re: [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required
  2015-04-26  9:26   ` Sagi Grimberg
@ 2015-04-26  9:44     ` Sagi Grimberg
  2015-04-27 12:57       ` Akinobu Mita
  0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-26  9:44 UTC (permalink / raw)
  To: Akinobu Mita, target-devel
  Cc: Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

On 4/26/2015 12:26 PM, Sagi Grimberg wrote:
> On 4/25/2015 5:33 PM, Akinobu Mita wrote:
>> Even if the device backend is initialized with protection info is
>> enabled, some requests don't have the protection info attached for
>> WRITE SAME command issued by block device helpers, WRITE command with
>> WRPROTECT=0 by SG_IO ioctl, etc.
>>
>> So when TCM loopback fabric module is used, se_cmd->t_prot_sg is NULL
>> for these requests and performing WRITE_INSERT of PI using software
>> emulation by sbc_dif_generate() causes kernel crash.
>>
>> To fix this, introduce SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC for
>> se_cmd_flags, which is used to determine that se_cmd->t_prot_sg needs
>> to be allocated or use pre-allocated protection information by scsi
>> mid-layer.
>>
>> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
>> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
>> Cc: Sagi Grimberg <sagig@mellanox.com>
>> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
>> Cc: target-devel@vger.kernel.org
>> Cc: linux-scsi@vger.kernel.org
>> ---
>> * No change from v2
>>
>>   drivers/target/target_core_transport.c | 30
>> ++++++++++++++++++------------
>>   include/target/target_core_base.h      |  1 +
>>   2 files changed, 19 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/target/target_core_transport.c
>> b/drivers/target/target_core_transport.c
>> index 7a9e7e2..fe52883 100644
>> --- a/drivers/target/target_core_transport.c
>> +++ b/drivers/target/target_core_transport.c
>> @@ -1450,6 +1450,7 @@ int target_submit_cmd_map_sgls(struct se_cmd
>> *se_cmd, struct se_session *se_sess
>>       if (sgl_prot_count) {
>>           se_cmd->t_prot_sg = sgl_prot;
>>           se_cmd->t_prot_nents = sgl_prot_count;
>> +        se_cmd->se_cmd_flags |= SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC;
>>       }
>>
>>       /*
>> @@ -2181,6 +2182,12 @@ static inline void
>> transport_reset_sgl_orig(struct se_cmd *cmd)
>>
>>   static inline void transport_free_pages(struct se_cmd *cmd)
>>   {
>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>> +        transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>> +        cmd->t_prot_sg = NULL;
>> +        cmd->t_prot_nents = 0;
>> +    }
>> +
>
> Hi Akinobu,
>
> Any reason why this changed it's location to the start of the function?
>
>>       if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
>>           /*
>>            * Release special case READ buffer payload required for
>> @@ -2204,10 +2211,6 @@ static inline void transport_free_pages(struct
>> se_cmd *cmd)
>>       transport_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
>>       cmd->t_bidi_data_sg = NULL;
>>       cmd->t_bidi_data_nents = 0;
>> -
>> -    transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>> -    cmd->t_prot_sg = NULL;
>> -    cmd->t_prot_nents = 0;
>>   }
>>
>>   /**
>> @@ -2346,6 +2349,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>>       int ret = 0;
>>       bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
>>
>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>> +        if (cmd->prot_op != TARGET_PROT_NORMAL) {
>
> This seems wrong,
>
> What will happen for transports that will actually to allocate
> protection SGLs? The allocation is unreachable since
> SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is not set...

Umm, actually this is reachable... But I still think the condition
should be the other way around (saving a condition in some common
cases).

>
> I'd say this needs to be:
>
> if (cmd->prot_op != TARGET_PROT_NORMAL &&
>      !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>
>> +            ret = target_alloc_sgl(&cmd->t_prot_sg,
>> +                           &cmd->t_prot_nents,
>> +                           cmd->prot_length, true);
>> +            if (ret < 0)
>> +                return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> +        }
>> +
>> +    }
>> +
>>       /*
>>        * Determine is the TCM fabric module has already allocated
>> physical
>>        * memory, and is directly calling
>> transport_generic_map_mem_to_cmd()
>> @@ -2371,14 +2385,6 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>>                   return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>>           }
>>
>> -        if (cmd->prot_op != TARGET_PROT_NORMAL) {
>> -            ret = target_alloc_sgl(&cmd->t_prot_sg,
>> -                           &cmd->t_prot_nents,
>> -                           cmd->prot_length, true);
>> -            if (ret < 0)
>> -                return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> -        }
>> -
>>           ret = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents,
>>                          cmd->data_length, zero_flag);
>>           if (ret < 0)
>> diff --git a/include/target/target_core_base.h
>> b/include/target/target_core_base.h
>> index 480e9f8..13efcdd 100644
>> --- a/include/target/target_core_base.h
>> +++ b/include/target/target_core_base.h
>> @@ -167,6 +167,7 @@ enum se_cmd_flags_table {
>>       SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC = 0x00020000,
>>       SCF_COMPARE_AND_WRITE        = 0x00080000,
>>       SCF_COMPARE_AND_WRITE_POST    = 0x00100000,
>> +    SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC = 0x00200000,
>>   };
>>
>>   /* struct se_dev_entry->lun_flags and struct se_lun->lun_access */
>>
>


* Re: [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME
  2015-04-25 14:33 ` [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME Akinobu Mita
@ 2015-04-26  9:53   ` Sagi Grimberg
  2015-04-27 12:58     ` Akinobu Mita
  0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-26  9:53 UTC (permalink / raw)
  To: Akinobu Mita, target-devel
  Cc: Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

On 4/25/2015 5:33 PM, Akinobu Mita wrote:
> For WRITE SAME, data transfer memory only contains a single block but
> protection information is required for all blocks that are written by
> the command.
>
> This makes sbc_dif_generate() and sbc_dif_verify() work for WRITE_SAME.

This feels a bit like an overshoot...

You only have one block. Is it really a good idea to calculate
the CRC over and over for WRITE SAME? Wouldn't it be better to
have a really simple sbc_dif_generate_same() that calculates the
block CRC once and uses it for the entire payload (and watches for
Type 1 to increment the ref-tag)?

>
> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
> Cc: Sagi Grimberg <sagig@mellanox.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
> Cc: target-devel@vger.kernel.org
> Cc: linux-scsi@vger.kernel.org
> ---
> * Changes from v2:
> - Handle odd SG mapping correctly instead of giving up
>
>   drivers/target/target_core_sbc.c | 28 ++++++++++++++++++++++++++++
>   1 file changed, 28 insertions(+)
>
> diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
> index 33d2426..10c7cb9 100644
> --- a/drivers/target/target_core_sbc.c
> +++ b/drivers/target/target_core_sbc.c
> @@ -1177,6 +1177,24 @@ err:
>   }
>   EXPORT_SYMBOL(sbc_execute_unmap);
>
> +static bool sbc_is_write_same(struct se_cmd *cmd)
> +{
> +	u16 service_action;
> +
> +	switch (cmd->t_task_cdb[0]) {
> +	case WRITE_SAME:
> +	case WRITE_SAME_16:
> +		return true;
> +	case VARIABLE_LENGTH_CMD:
> +		service_action = get_unaligned_be16(&cmd->t_task_cdb[8]);
> +		if (service_action == WRITE_SAME_32)
> +			return true;
> +		break;
> +	}
> +
> +	return false;
> +}
> +
>   void
>   sbc_dif_generate(struct se_cmd *cmd)
>   {
> @@ -1187,6 +1205,7 @@ sbc_dif_generate(struct se_cmd *cmd)
>   	void *daddr, *paddr;
>   	int i, j, offset = 0;
>   	unsigned int block_size = dev->dev_attrib.block_size;
> +	bool is_write_same = sbc_is_write_same(cmd);
>
>   	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
>   		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> @@ -1201,6 +1220,8 @@ sbc_dif_generate(struct se_cmd *cmd)
>   				offset -= dsg->length;
>   				kunmap_atomic(daddr);
>   				dsg = sg_next(dsg);
> +				if (!dsg && is_write_same)
> +					dsg = cmd->t_data_sg;
>   				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   			}
>
> @@ -1211,6 +1232,8 @@ sbc_dif_generate(struct se_cmd *cmd)
>   			if (avail < block_size) {
>   				kunmap_atomic(daddr);
>   				dsg = sg_next(dsg);
> +				if (!dsg && is_write_same)
> +					dsg = cmd->t_data_sg;
>   				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   				offset = block_size - avail;
>   				crc = crc_t10dif_update(crc, daddr, offset);
> @@ -1336,6 +1359,7 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
>   	sense_reason_t rc;
>   	int dsg_off = 0;
>   	unsigned int block_size = dev->dev_attrib.block_size;
> +	bool is_write_same = sbc_is_write_same(cmd);
>
>   	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
>   		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> @@ -1351,6 +1375,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
>   				dsg_off -= dsg->length;
>   				kunmap_atomic(daddr);
>   				dsg = sg_next(dsg);
> +				if (!dsg && is_write_same)
> +					dsg = cmd->t_data_sg;
>   				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   			}
>
> @@ -1372,6 +1398,8 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
>   			if (avail < block_size) {
>   				kunmap_atomic(daddr);
>   				dsg = sg_next(dsg);
> +				if (!dsg && is_write_same)
> +					dsg = cmd->t_data_sg;
>   				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   				dsg_off = block_size - avail;
>   				crc = crc_t10dif_update(crc, daddr, dsg_off);
>



* Re: [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled
  2015-04-25 14:33 ` [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled Akinobu Mita
@ 2015-04-26  9:58   ` Sagi Grimberg
  2015-04-27 13:02     ` Akinobu Mita
  0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-26  9:58 UTC (permalink / raw)
  To: Akinobu Mita, target-devel
  Cc: Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

On 4/25/2015 5:33 PM, Akinobu Mita wrote:
> Now we can generate correct PI for the WRITE SAME command, so it is
> unnecessary to disallow WRITE SAME when protection info is enabled.
>
> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
> Cc: Sagi Grimberg <sagig@mellanox.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
> Cc: target-devel@vger.kernel.org
> Cc: linux-scsi@vger.kernel.org
> ---
> * No change from v2
>
>   drivers/target/target_core_file.c | 18 +++++++++++++-----
>   1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
> index 829817a..fe98f58 100644
> --- a/drivers/target/target_core_file.c
> +++ b/drivers/target/target_core_file.c
> @@ -376,16 +376,12 @@ fd_execute_write_same(struct se_cmd *cmd)
>   	struct bio_vec *bvec;
>   	unsigned int len = 0, i;
>   	ssize_t ret;
> +	sense_reason_t rc;
>
>   	if (!nolb) {
>   		target_complete_cmd(cmd, SAM_STAT_GOOD);
>   		return 0;
>   	}
> -	if (cmd->prot_op) {
> -		pr_err("WRITE_SAME: Protection information with FILEIO"
> -		       " backends not supported\n");
> -		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> -	}
>
>   	if (cmd->t_data_nents > 1 ||
>   	    cmd->t_data_sg[0].length != cmd->se_dev->dev_attrib.block_size) {
> @@ -397,6 +393,10 @@ fd_execute_write_same(struct se_cmd *cmd)
>   		return TCM_INVALID_CDB_FIELD;
>   	}
>
> +	rc = sbc_dif_verify(cmd, cmd->t_task_lba, nolb, 0, cmd->t_prot_sg, 0);
> +	if (rc)
> +		return rc;
> +
>   	bvec = kcalloc(nolb, sizeof(struct bio_vec), GFP_KERNEL);
>   	if (!bvec)
>   		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> @@ -418,6 +418,14 @@ fd_execute_write_same(struct se_cmd *cmd)
>   		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>   	}
>
> +	if (cmd->prot_op) {
> +		ret = fd_do_rw(cmd, fd_dev->fd_prot_file, se_dev->prot_length,
> +				cmd->t_prot_sg, cmd->t_prot_nents,
> +				cmd->prot_length, 1);
> +		if (ret < 0)
> +			return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
> +	}
> +
>   	target_complete_cmd(cmd, SAM_STAT_GOOD);
>   	return 0;
>   }
>

This looks good,

iblock needs this too, though. I think you're just missing a call to
iblock_alloc_bip() and then you're good to go (you can use scsi_debug
with dif/dix to test it). I think it belongs in the same patch.

Sagi.


* Re: [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory
  2015-04-25 14:33 ` [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory Akinobu Mita
@ 2015-04-26 10:07   ` Sagi Grimberg
  2015-04-27 13:03     ` Akinobu Mita
  0 siblings, 1 reply; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-26 10:07 UTC (permalink / raw)
  To: Akinobu Mita, target-devel
  Cc: Tim Chen, Herbert Xu, David S. Miller, linux-crypto,
	Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley, linux-scsi

On 4/25/2015 5:33 PM, Akinobu Mita wrote:
> sbc_dif_generate() and sbc_dif_verify() currently assume that each
> SG element for data transfer memory doesn't straddle the block size
> boundary.
>
> However, when using SG_IO ioctl, we can choose the data transfer
> memory which doesn't satisfy that alignment requirement.
>
> In order to handle such cases correctly, this change inverts the loop
> nesting so that the outer loop iterates over the protection information
> and the inner loop walks the data transfer memory, and makes it possible
> to calculate the CRC for a block that straddles multiple SG elements.
>
> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Herbert Xu <herbert@gondor.apana.org.au>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: linux-crypto@vger.kernel.org
> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
> Cc: Sagi Grimberg <sagig@mellanox.com>
> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
> Cc: target-devel@vger.kernel.org
> Cc: linux-scsi@vger.kernel.org
> ---
> * Changes from v2:
> - Handle odd SG mapping correctly instead of giving up
>
>   drivers/target/target_core_sbc.c | 108 +++++++++++++++++++++++++--------------
>   1 file changed, 69 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
> index edba39f..33d2426 100644
> --- a/drivers/target/target_core_sbc.c
> +++ b/drivers/target/target_core_sbc.c
> @@ -1182,27 +1182,43 @@ sbc_dif_generate(struct se_cmd *cmd)
>   {
>   	struct se_device *dev = cmd->se_dev;
>   	struct se_dif_v1_tuple *sdt;
> -	struct scatterlist *dsg, *psg = cmd->t_prot_sg;
> +	struct scatterlist *dsg = cmd->t_data_sg, *psg;
>   	sector_t sector = cmd->t_task_lba;
>   	void *daddr, *paddr;
>   	int i, j, offset = 0;
> +	unsigned int block_size = dev->dev_attrib.block_size;
>
> -	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
> -		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
> +	for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
>   		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> +		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>
> -		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
> +		for (j = 0; j < psg->length;
> +				j += sizeof(struct se_dif_v1_tuple)) {
> +			__u16 crc = 0;
> +			unsigned int avail;
>
> -			if (offset >= psg->length) {
> -				kunmap_atomic(paddr);
> -				psg = sg_next(psg);
> -				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> -				offset = 0;
> +			if (offset >= dsg->length) {
> +				offset -= dsg->length;
> +				kunmap_atomic(daddr);

This unmap is inconsistent. You need to unmap (daddr - dsg->offset).

This applies throughout the patch.

> +				dsg = sg_next(dsg);
> +				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   			}
>
> -			sdt = paddr + offset;
> -			sdt->guard_tag = cpu_to_be16(crc_t10dif(daddr + j,
> -						dev->dev_attrib.block_size));
> +			sdt = paddr + j;
> +
> +			avail = min(block_size, dsg->length - offset);
> +			crc = crc_t10dif(daddr + offset, avail);
> +			if (avail < block_size) {
> +				kunmap_atomic(daddr);
> +				dsg = sg_next(dsg);
> +				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
> +				offset = block_size - avail;
> +				crc = crc_t10dif_update(crc, daddr, offset);
> +			} else {
> +				offset += block_size;
> +			}
> +
> +			sdt->guard_tag = cpu_to_be16(crc);
>   			if (cmd->prot_type == TARGET_DIF_TYPE1_PROT)
>   				sdt->ref_tag = cpu_to_be32(sector & 0xffffffff);
>   			sdt->app_tag = 0;
> @@ -1215,26 +1231,23 @@ sbc_dif_generate(struct se_cmd *cmd)
>   				 be32_to_cpu(sdt->ref_tag));
>
>   			sector++;
> -			offset += sizeof(struct se_dif_v1_tuple);
>   		}
>
> -		kunmap_atomic(paddr);
>   		kunmap_atomic(daddr);
> +		kunmap_atomic(paddr);
>   	}
>   }
>
>   static sense_reason_t
>   sbc_dif_v1_verify(struct se_cmd *cmd, struct se_dif_v1_tuple *sdt,
> -		  const void *p, sector_t sector, unsigned int ei_lba)
> +		  __u16 crc, sector_t sector, unsigned int ei_lba)
>   {
> -	struct se_device *dev = cmd->se_dev;
> -	int block_size = dev->dev_attrib.block_size;
>   	__be16 csum;
>
>   	if (!(cmd->prot_checks & TARGET_DIF_CHECK_GUARD))
>   		goto check_ref;
>
> -	csum = cpu_to_be16(crc_t10dif(p, block_size));
> +	csum = cpu_to_be16(crc);
>
>   	if (sdt->guard_tag != csum) {
>   		pr_err("DIFv1 checksum failed on sector %llu guard tag 0x%04x"
> @@ -1316,26 +1329,32 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
>   {
>   	struct se_device *dev = cmd->se_dev;
>   	struct se_dif_v1_tuple *sdt;
> -	struct scatterlist *dsg;
> +	struct scatterlist *dsg = cmd->t_data_sg;
>   	sector_t sector = start;
>   	void *daddr, *paddr;
> -	int i, j;
> +	int i;
>   	sense_reason_t rc;
> +	int dsg_off = 0;
> +	unsigned int block_size = dev->dev_attrib.block_size;
>
> -	for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
> -		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
> +	for (; psg && sector < start + sectors; psg = sg_next(psg)) {
>   		paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> +		daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>
> -		for (j = 0; j < dsg->length; j += dev->dev_attrib.block_size) {
> -
> -			if (psg_off >= psg->length) {
> -				kunmap_atomic(paddr - psg->offset);
> -				psg = sg_next(psg);
> -				paddr = kmap_atomic(sg_page(psg)) + psg->offset;
> -				psg_off = 0;
> +		for (i = psg_off; i < psg->length &&
> +				sector < start + sectors;
> +				i += sizeof(struct se_dif_v1_tuple)) {
> +			__u16 crc;
> +			unsigned int avail;
> +
> +			if (dsg_off >= dsg->length) {
> +				dsg_off -= dsg->length;
> +				kunmap_atomic(daddr);
> +				dsg = sg_next(dsg);
> +				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>   			}
>
> -			sdt = paddr + psg_off;
> +			sdt = paddr + i;
>
>   			pr_debug("DIF READ sector: %llu guard_tag: 0x%04x"
>   				 " app_tag: 0x%04x ref_tag: %u\n",
> @@ -1343,27 +1362,38 @@ sbc_dif_verify(struct se_cmd *cmd, sector_t start, unsigned int sectors,
>   				 sdt->app_tag, be32_to_cpu(sdt->ref_tag));
>
>   			if (sdt->app_tag == cpu_to_be16(0xffff)) {
> -				sector++;
> -				psg_off += sizeof(struct se_dif_v1_tuple);
> -				continue;
> +				dsg_off += block_size;
> +				goto next;
> +			}
> +
> +			avail = min(block_size, dsg->length - dsg_off);
> +
> +			crc = crc_t10dif(daddr + dsg_off, avail);
> +			if (avail < block_size) {
> +				kunmap_atomic(daddr);
> +				dsg = sg_next(dsg);
> +				daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
> +				dsg_off = block_size - avail;
> +				crc = crc_t10dif_update(crc, daddr, dsg_off);
> +			} else {
> +				dsg_off += block_size;
>   			}
>
> -			rc = sbc_dif_v1_verify(cmd, sdt, daddr + j, sector,
> -					       ei_lba);
> +			rc = sbc_dif_v1_verify(cmd, sdt, crc, sector, ei_lba);
>   			if (rc) {
> -				kunmap_atomic(paddr - psg->offset);
>   				kunmap_atomic(daddr - dsg->offset);
> +				kunmap_atomic(paddr - psg->offset);
>   				cmd->bad_sector = sector;
>   				return rc;
>   			}
> -
> +next:
>   			sector++;
>   			ei_lba++;
> -			psg_off += sizeof(struct se_dif_v1_tuple);
>   		}
>
> -		kunmap_atomic(paddr - psg->offset);
> +		psg_off = 0;
>   		kunmap_atomic(daddr - dsg->offset);
> +		kunmap_atomic(paddr - psg->offset);
>   	}
>
>   	return 0;
>


* Re: [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required
  2015-04-26  9:44     ` Sagi Grimberg
@ 2015-04-27 12:57       ` Akinobu Mita
  2015-04-27 15:08         ` Sagi Grimberg
  0 siblings, 1 reply; 14+ messages in thread
From: Akinobu Mita @ 2015-04-27 12:57 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: target-devel, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi@vger.kernel.org

2015-04-26 18:44 GMT+09:00 Sagi Grimberg <sagig@dev.mellanox.co.il>:
>>> @@ -2181,6 +2182,12 @@ static inline void
>>> transport_reset_sgl_orig(struct se_cmd *cmd)
>>>
>>>   static inline void transport_free_pages(struct se_cmd *cmd)
>>>   {
>>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>>> +        transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>>> +        cmd->t_prot_sg = NULL;
>>> +        cmd->t_prot_nents = 0;
>>> +    }
>>> +
>>
>>
>> Hi Akinobu,
>>
>> Any reason why this changed its location to the start of the function?

Because when SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is set, the function
never reaches its tail.  So when SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC
is set but SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is cleared,
se_cmd->t_prot_sg leaks.

>>>       if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
>>>           /*
>>>            * Release special case READ buffer payload required for
>>> @@ -2204,10 +2211,6 @@ static inline void transport_free_pages(struct
>>> se_cmd *cmd)
>>>       transport_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
>>>       cmd->t_bidi_data_sg = NULL;
>>>       cmd->t_bidi_data_nents = 0;
>>> -
>>> -    transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>>> -    cmd->t_prot_sg = NULL;
>>> -    cmd->t_prot_nents = 0;
>>>   }
>>>
>>>   /**
>>> @@ -2346,6 +2349,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>>>       int ret = 0;
>>>       bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
>>>
>>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>>> +        if (cmd->prot_op != TARGET_PROT_NORMAL) {
>>
>>
>> This seems wrong,
>>
>> What will happen for transports that actually need to allocate
>> protection SGLs? The allocation is unreachable since
>> SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is not set...
>
>
> Umm, actually this is reachable... But I still think the condition
> should be the other way around (saving a condition in some common
> cases).

Do you mean you prefer below?

if (cmd->prot_op != TARGET_PROT_NORMAL &&
    !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
...

>>
>> I'd say this needs to be:
>>
>> if (cmd->prot_op != TARGET_PROT_NORMAL &&
>>      !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {


* Re: [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME
  2015-04-26  9:53   ` Sagi Grimberg
@ 2015-04-27 12:58     ` Akinobu Mita
  0 siblings, 0 replies; 14+ messages in thread
From: Akinobu Mita @ 2015-04-27 12:58 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: target-devel, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi@vger.kernel.org

2015-04-26 18:53 GMT+09:00 Sagi Grimberg <sagig@dev.mellanox.co.il>:
> On 4/25/2015 5:33 PM, Akinobu Mita wrote:
>>
>> For WRITE SAME, data transfer memory only contains a single block but
>> protection information is required for all blocks that are written by
>> the command.
>>
>> This makes sbc_dif_generate() and sbc_dif_verify() work for WRITE_SAME.
>
>
> This feels a bit like overkill...
>
> You only have one block; is it really a good idea to calculate
> the CRC over and over for WRITE SAME? Wouldn't it be better to
> have a really simple sbc_dif_generate_same() that calculates the
> block CRC once and uses it for the entire payload (and watches for
> Type 1 to increment the ref tag)?

Sounds good.  I'll take the idea.


* Re: [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled
  2015-04-26  9:58   ` Sagi Grimberg
@ 2015-04-27 13:02     ` Akinobu Mita
  0 siblings, 0 replies; 14+ messages in thread
From: Akinobu Mita @ 2015-04-27 13:02 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: target-devel, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi@vger.kernel.org

2015-04-26 18:58 GMT+09:00 Sagi Grimberg <sagig@dev.mellanox.co.il>:
> On 4/25/2015 5:33 PM, Akinobu Mita wrote:
>>
>> Now we can generate correct PI for the WRITE SAME command, so it is
>> unnecessary to disallow WRITE SAME when protection info is enabled.
>>
>> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
>> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
>> Cc: Sagi Grimberg <sagig@mellanox.com>
>> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
>> Cc: target-devel@vger.kernel.org
>> Cc: linux-scsi@vger.kernel.org
>> ---
>> * No change from v2
>>
>>   drivers/target/target_core_file.c | 18 +++++++++++++-----
>>   1 file changed, 13 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/target/target_core_file.c
>> b/drivers/target/target_core_file.c
>> index 829817a..fe98f58 100644
>> --- a/drivers/target/target_core_file.c
>> +++ b/drivers/target/target_core_file.c
>> @@ -376,16 +376,12 @@ fd_execute_write_same(struct se_cmd *cmd)
>>         struct bio_vec *bvec;
>>         unsigned int len = 0, i;
>>         ssize_t ret;
>> +       sense_reason_t rc;
>>
>>         if (!nolb) {
>>                 target_complete_cmd(cmd, SAM_STAT_GOOD);
>>                 return 0;
>>         }
>> -       if (cmd->prot_op) {
>> -               pr_err("WRITE_SAME: Protection information with FILEIO"
>> -                      " backends not supported\n");
>> -               return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> -       }
>>
>>         if (cmd->t_data_nents > 1 ||
>>             cmd->t_data_sg[0].length !=
>> cmd->se_dev->dev_attrib.block_size) {
>> @@ -397,6 +393,10 @@ fd_execute_write_same(struct se_cmd *cmd)
>>                 return TCM_INVALID_CDB_FIELD;
>>         }
>>
>> +       rc = sbc_dif_verify(cmd, cmd->t_task_lba, nolb, 0, cmd->t_prot_sg,
>> 0);
>> +       if (rc)
>> +               return rc;
>> +
>>         bvec = kcalloc(nolb, sizeof(struct bio_vec), GFP_KERNEL);
>>         if (!bvec)
>>                 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> @@ -418,6 +418,14 @@ fd_execute_write_same(struct se_cmd *cmd)
>>                 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>>         }
>>
>> +       if (cmd->prot_op) {
>> +               ret = fd_do_rw(cmd, fd_dev->fd_prot_file,
>> se_dev->prot_length,
>> +                               cmd->t_prot_sg, cmd->t_prot_nents,
>> +                               cmd->prot_length, 1);
>> +               if (ret < 0)
>> +                       return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
>> +       }
>> +
>>         target_complete_cmd(cmd, SAM_STAT_GOOD);
>>         return 0;
>>   }
>>
>
> This looks good,

As you pointed out in the other mail, this change doesn't currently
work with a real HW fabric, because we don't yet generate the multiple
identical protection fields needed for the single data block.

So I'm considering dropping this from this patch series for now.

> iblock needs this too, though. I think you're just missing a call to
> iblock_alloc_bip() and then you're good to go (you can use scsi_debug
> with dif/dix to test it). I think it belongs in the same patch.

Thanks for the information.  I'll take a look.


* Re: [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory
  2015-04-26 10:07   ` Sagi Grimberg
@ 2015-04-27 13:03     ` Akinobu Mita
  0 siblings, 0 replies; 14+ messages in thread
From: Akinobu Mita @ 2015-04-27 13:03 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: target-devel, Tim Chen, Herbert Xu, David S. Miller, linux-crypto,
	Nicholas Bellinger, Sagi Grimberg, Martin K. Petersen,
	Christoph Hellwig, James E.J. Bottomley,
	linux-scsi@vger.kernel.org

2015-04-26 19:07 GMT+09:00 Sagi Grimberg <sagig@dev.mellanox.co.il>:
> On 4/25/2015 5:33 PM, Akinobu Mita wrote:
>>
>> sbc_dif_generate() and sbc_dif_verify() currently assume that each
>> SG element for data transfer memory doesn't straddle the block size
>> boundary.
>>
>> However, when using SG_IO ioctl, we can choose the data transfer
>> memory which doesn't satisfy that alignment requirement.
>>
>> In order to handle such cases correctly, this change inverts the loop
>> nesting so that the outer loop iterates over the protection information
>> and the inner loop walks the data transfer memory, and makes it possible
>> to calculate the CRC for a block that straddles multiple SG elements.
>>
>> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
>> Cc: Tim Chen <tim.c.chen@linux.intel.com>
>> Cc: Herbert Xu <herbert@gondor.apana.org.au>
>> Cc: "David S. Miller" <davem@davemloft.net>
>> Cc: linux-crypto@vger.kernel.org
>> Cc: Nicholas Bellinger <nab@linux-iscsi.org>
>> Cc: Sagi Grimberg <sagig@mellanox.com>
>> Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
>> Cc: target-devel@vger.kernel.org
>> Cc: linux-scsi@vger.kernel.org
>> ---
>> * Changes from v2:
>> - Handle odd SG mapping correctly instead of giving up
>>
>>   drivers/target/target_core_sbc.c | 108
>> +++++++++++++++++++++++++--------------
>>   1 file changed, 69 insertions(+), 39 deletions(-)
>>
>> diff --git a/drivers/target/target_core_sbc.c
>> b/drivers/target/target_core_sbc.c
>> index edba39f..33d2426 100644
>> --- a/drivers/target/target_core_sbc.c
>> +++ b/drivers/target/target_core_sbc.c
>> @@ -1182,27 +1182,43 @@ sbc_dif_generate(struct se_cmd *cmd)
>>   {
>>         struct se_device *dev = cmd->se_dev;
>>         struct se_dif_v1_tuple *sdt;
>> -       struct scatterlist *dsg, *psg = cmd->t_prot_sg;
>> +       struct scatterlist *dsg = cmd->t_data_sg, *psg;
>>         sector_t sector = cmd->t_task_lba;
>>         void *daddr, *paddr;
>>         int i, j, offset = 0;
>> +       unsigned int block_size = dev->dev_attrib.block_size;
>>
>> -       for_each_sg(cmd->t_data_sg, dsg, cmd->t_data_nents, i) {
>> -               daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>> +       for_each_sg(cmd->t_prot_sg, psg, cmd->t_prot_nents, i) {
>>                 paddr = kmap_atomic(sg_page(psg)) + psg->offset;
>> +               daddr = kmap_atomic(sg_page(dsg)) + dsg->offset;
>>
>> -               for (j = 0; j < dsg->length; j +=
>> dev->dev_attrib.block_size) {
>> +               for (j = 0; j < psg->length;
>> +                               j += sizeof(struct se_dif_v1_tuple)) {
>> +                       __u16 crc = 0;
>> +                       unsigned int avail;
>>
>> -                       if (offset >= psg->length) {
>> -                               kunmap_atomic(paddr);
>> -                               psg = sg_next(psg);
>> -                               paddr = kmap_atomic(sg_page(psg)) +
>> psg->offset;
>> -                               offset = 0;
>> +                       if (offset >= dsg->length) {
>> +                               offset -= dsg->length;
>> +                               kunmap_atomic(daddr);
>
>
> This unmap is inconsistent. You need to unmap (daddr - dsg->offset).
>
> This applies throughout the patch.

Thanks for pointing out.  I'll fix them all.


* Re: [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required
  2015-04-27 12:57       ` Akinobu Mita
@ 2015-04-27 15:08         ` Sagi Grimberg
  0 siblings, 0 replies; 14+ messages in thread
From: Sagi Grimberg @ 2015-04-27 15:08 UTC (permalink / raw)
  To: Akinobu Mita
  Cc: target-devel, Nicholas Bellinger, Sagi Grimberg,
	Martin K. Petersen, Christoph Hellwig, James E.J. Bottomley,
	linux-scsi@vger.kernel.org

On 4/27/2015 3:57 PM, Akinobu Mita wrote:
> 2015-04-26 18:44 GMT+09:00 Sagi Grimberg <sagig@dev.mellanox.co.il>:
>>>> @@ -2181,6 +2182,12 @@ static inline void
>>>> transport_reset_sgl_orig(struct se_cmd *cmd)
>>>>
>>>>    static inline void transport_free_pages(struct se_cmd *cmd)
>>>>    {
>>>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>>>> +        transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>>>> +        cmd->t_prot_sg = NULL;
>>>> +        cmd->t_prot_nents = 0;
>>>> +    }
>>>> +
>>>
>>>
>>> Hi Akinobu,
>>>
>>> Any reason why this changed its location to the start of the function?
>
> Because when SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is set, the function
> never reaches its tail.  So when SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC
> is set but SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is cleared,
> se_cmd->t_prot_sg leaks.

I see. That's fine...

>
>>>>        if (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC) {
>>>>            /*
>>>>             * Release special case READ buffer payload required for
>>>> @@ -2204,10 +2211,6 @@ static inline void transport_free_pages(struct
>>>> se_cmd *cmd)
>>>>        transport_free_sgl(cmd->t_bidi_data_sg, cmd->t_bidi_data_nents);
>>>>        cmd->t_bidi_data_sg = NULL;
>>>>        cmd->t_bidi_data_nents = 0;
>>>> -
>>>> -    transport_free_sgl(cmd->t_prot_sg, cmd->t_prot_nents);
>>>> -    cmd->t_prot_sg = NULL;
>>>> -    cmd->t_prot_nents = 0;
>>>>    }
>>>>
>>>>    /**
>>>> @@ -2346,6 +2349,17 @@ transport_generic_new_cmd(struct se_cmd *cmd)
>>>>        int ret = 0;
>>>>        bool zero_flag = !(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB);
>>>>
>>>> +    if (!(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
>>>> +        if (cmd->prot_op != TARGET_PROT_NORMAL) {
>>>
>>>
>>> This seems wrong,
>>>
>>> What will happen for transports that actually need to allocate
>>> protection SGLs? The allocation is unreachable since
>>> SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC is not set...
>>
>>
>> Umm, actually this is reachable... But I still think the condition
>> should be the other way around (saving a condition in some common
>> cases).
>
> Do you mean you prefer below?
>
> if (cmd->prot_op != TARGET_PROT_NORMAL &&
>      !(cmd->se_cmd_flags & SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC)) {
> ...
>

I think it will be better.



end of thread, other threads:[~2015-04-27 15:08 UTC | newest]

Thread overview: 14+ messages
     [not found] <1429972410-7146-1-git-send-email-akinobu.mita@gmail.com>
2015-04-25 14:33 ` [PATCH v3 1/5] target: ensure se_cmd->t_prot_sg is allocated when required Akinobu Mita
2015-04-26  9:26   ` Sagi Grimberg
2015-04-26  9:44     ` Sagi Grimberg
2015-04-27 12:57       ` Akinobu Mita
2015-04-27 15:08         ` Sagi Grimberg
2015-04-25 14:33 ` [PATCH v3 3/5] target: handle odd SG mapping for data transfer memory Akinobu Mita
2015-04-26 10:07   ` Sagi Grimberg
2015-04-27 13:03     ` Akinobu Mita
2015-04-25 14:33 ` [PATCH v3 4/5] target: Fix sbc_dif_generate() and sbc_dif_verify() for WRITE SAME Akinobu Mita
2015-04-26  9:53   ` Sagi Grimberg
2015-04-27 12:58     ` Akinobu Mita
2015-04-25 14:33 ` [PATCH v3 5/5] target/file: enable WRITE SAME when protection info is enabled Akinobu Mita
2015-04-26  9:58   ` Sagi Grimberg
2015-04-27 13:02     ` Akinobu Mita
