linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large
@ 2013-07-26 17:08 Marcelo Cerri
  2013-07-26 17:08 ` [PATCH 1/2] drivers/crypto/nx: fix physical addresses added to sg lists Marcelo Cerri
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Marcelo Cerri @ 2013-07-26 17:08 UTC (permalink / raw)
  To: benh; +Cc: linux-kernel, linux-crypto, Marcelo Cerri

This series of patches fixes two bugs that are triggered when the input
data is too large. The first is caused by a miscalculation of physical
addresses, and the second by limits that the co-processor imposes on the
input data.

Marcelo Cerri (2):
  drivers/crypto/nx: fix physical addresses added to sg lists
  drivers/crypto/nx: fix limits to sg lists for SHA-2

 drivers/crypto/nx/nx-sha256.c | 108 +++++++++++++++++++++++-----------------
 drivers/crypto/nx/nx-sha512.c | 113 ++++++++++++++++++++++++------------------
 drivers/crypto/nx/nx.c        |  22 ++++++--
 3 files changed, 148 insertions(+), 95 deletions(-)

-- 
1.7.12


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/2] drivers/crypto/nx: fix physical addresses added to sg lists
  2013-07-26 17:08 [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large Marcelo Cerri
@ 2013-07-26 17:08 ` Marcelo Cerri
  2013-07-26 17:08 ` [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2 Marcelo Cerri
  2013-08-01  9:26 ` [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large Benjamin Herrenschmidt
  2 siblings, 0 replies; 8+ messages in thread
From: Marcelo Cerri @ 2013-07-26 17:08 UTC (permalink / raw)
  To: benh
  Cc: linux-kernel, linux-crypto, Marcelo Cerri, Fionnuala Gunter,
	Joel Schopp, Joy Latten

The co-processor receives data to be hashed through scatter/gather lists
pointing to physical addresses. When vmalloc'ed data is given, the
driver must calculate the physical address of each page of the data.

However, the current version calculates the physical address only once
and keeps incrementing it even when a page boundary is crossed. This
patch fixes that behaviour.
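The fix amounts to re-translating the virtual address at every page
boundary instead of incrementing a single physical address, since
virtually contiguous vmalloc pages are generally not physically
contiguous. A minimal userspace sketch of that per-page splitting (not
the driver code itself; `virt_to_phys_fn` is a made-up stand-in for the
kernel's vmalloc_to_page()/page_to_phys() pair):

```c
/* Userspace sketch: split a virtually contiguous buffer into
 * scatter/gather entries, looking the physical address up again each
 * time a page boundary is crossed instead of just incrementing it.
 */
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

struct sg_entry { uint64_t phys; uint64_t len; };

/* Hypothetical per-page translation: adjacent virtual pages may map to
 * arbitrary, non-contiguous physical pages. */
typedef uint64_t (*virt_to_phys_fn)(uint64_t vaddr);

static size_t build_sg(struct sg_entry *sg, size_t sgmax,
		       uint64_t vaddr, uint64_t len,
		       virt_to_phys_fn virt_to_phys)
{
	size_t n = 0;

	while (len && n < sgmax) {
		/* bytes left on the current page */
		uint64_t next_page = (vaddr & PAGE_MASK) + PAGE_SIZE;
		uint64_t chunk = next_page - vaddr;

		if (chunk > len)
			chunk = len;

		/* re-translate at every page boundary: the next physical
		 * page is NOT phys + PAGE_SIZE in general */
		sg[n].phys = virt_to_phys(vaddr);
		sg[n].len  = chunk;
		n++;

		vaddr += chunk;
		len   -= chunk;
	}
	return n;
}
```

With a translation callback that maps adjacent virtual pages to
non-adjacent physical pages, the second entry's address differs from
`sg[0].phys + sg[0].len` — exactly the case the naive increment got
wrong.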

Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@linux.vnet.ibm.com>
Signed-off-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c
index bbdab6e..ad07dc6 100644
--- a/drivers/crypto/nx/nx.c
+++ b/drivers/crypto/nx/nx.c
@@ -114,13 +114,29 @@ struct nx_sg *nx_build_sg_list(struct nx_sg *sg_head,
 	 * have been described (or @sgmax elements have been written), the
 	 * loop ends. min_t is used to ensure @end_addr falls on the same page
 	 * as sg_addr, if not, we need to create another nx_sg element for the
-	 * data on the next page */
+	 * data on the next page.
+	 *
+	 * Also when using vmalloc'ed data, every time that a system page
+	 * boundary is crossed the physical address needs to be re-calculated.
+	 */
 	for (sg = sg_head; sg_len < len; sg++) {
+		u64 next_page;
+
 		sg->addr = sg_addr;
-		sg_addr = min_t(u64, NX_PAGE_NUM(sg_addr + NX_PAGE_SIZE), end_addr);
-		sg->len = sg_addr - sg->addr;
+		sg_addr = min_t(u64, NX_PAGE_NUM(sg_addr + NX_PAGE_SIZE),
+				end_addr);
+
+		next_page = (sg->addr & PAGE_MASK) + PAGE_SIZE;
+		sg->len = min_t(u64, sg_addr, next_page) - sg->addr;
 		sg_len += sg->len;
 
+		if (sg_addr >= next_page &&
+				is_vmalloc_addr(start_addr + sg_len)) {
+			sg_addr = page_to_phys(vmalloc_to_page(
+						start_addr + sg_len));
+			end_addr = sg_addr + len - sg_len;
+		}
+
 		if ((sg - sg_head) == sgmax) {
 			pr_err("nx: scatter/gather list overflow, pid: %d\n",
 			       current->pid);
-- 
1.7.12



* [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2
  2013-07-26 17:08 [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large Marcelo Cerri
  2013-07-26 17:08 ` [PATCH 1/2] drivers/crypto/nx: fix physical addresses added to sg lists Marcelo Cerri
@ 2013-07-26 17:08 ` Marcelo Cerri
  2013-07-26 22:29   ` Benjamin Herrenschmidt
  2013-07-26 22:31   ` Benjamin Herrenschmidt
  2013-08-01  9:26 ` [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large Benjamin Herrenschmidt
  2 siblings, 2 replies; 8+ messages in thread
From: Marcelo Cerri @ 2013-07-26 17:08 UTC (permalink / raw)
  To: benh
  Cc: linux-kernel, linux-crypto, Marcelo Cerri, Fionnuala Gunter,
	Joel Schopp, Joy Latten

The co-processor has several limits on the length of scatter/gather
lists and the total number of bytes in them. These limits are available
in the device tree, as follows:

 - "ibm,max-sg-len": the maximum number of bytes of each scatter/gather
   list.

 - "ibm,max-sync-cop": used for synchronous operations, an array of
   structures containing the limits that must be respected for each mode
   and operation. The most important limits in it are:
   	- the total number of bytes that a scatter/gather list can hold;
	- the maximum number of elements that a scatter/gather list can
	  have.

This patch updates the NX driver to perform multiple hypercalls if
needed, so that the length limits for scatter/gather lists are always
respected.
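The chunking this introduces can be sketched in isolation: each pass
hands the co-processor the largest block-aligned chunk that fits both
its per-operation byte limit and what max_sg_len sg entries can
describe, then loops until less than one block remains. This is a
hedged userspace sketch with made-up limit values, not the driver code:

```c
/* Sketch of the per-pass chunk computation and the resulting number of
 * hcalls, for SHA-256 block sizes.  The limit values used in the test
 * are illustrative, not real device-tree values.
 */
#include <assert.h>
#include <stdint.h>

#define NX_PAGE_SIZE      4096ULL
#define SHA256_BLOCK_SIZE 64ULL

/* One pass: how many bytes to hand to the co-processor */
static uint64_t nx_to_process(uint64_t total, uint64_t databytelen,
			      uint64_t max_sg_len)
{
	uint64_t to_process = total;

	if (to_process > databytelen)
		to_process = databytelen;
	/* each sg entry covers at most one page; keep one entry spare */
	if (to_process > NX_PAGE_SIZE * (max_sg_len - 1))
		to_process = NX_PAGE_SIZE * (max_sg_len - 1);
	/* only whole blocks go to the hardware on non-final updates */
	return to_process & ~(SHA256_BLOCK_SIZE - 1);
}

/* Count the synchronous operations needed for a given total length */
static unsigned int nx_update_passes(uint64_t total, uint64_t databytelen,
				     uint64_t max_sg_len)
{
	unsigned int passes = 0;

	while (total >= SHA256_BLOCK_SIZE) {
		total -= nx_to_process(total, databytelen, max_sg_len);
		passes++;
	}
	return passes; /* 'total' bytes remain buffered in sctx->buf */
}
```

An input larger than either limit is simply processed in several
passes, with the sub-block tail carried in the state buffer — the same
shape as the do/while loop the patch adds to nx_sha256_update() and
nx_sha512_update().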

Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
Signed-off-by: Joel Schopp <jschopp@linux.vnet.ibm.com>
Signed-off-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-sha256.c | 108 +++++++++++++++++++++++-----------------
 drivers/crypto/nx/nx-sha512.c | 113 ++++++++++++++++++++++++------------------
 2 files changed, 129 insertions(+), 92 deletions(-)

diff --git a/drivers/crypto/nx/nx-sha256.c b/drivers/crypto/nx/nx-sha256.c
index 67024f2..254b01a 100644
--- a/drivers/crypto/nx/nx-sha256.c
+++ b/drivers/crypto/nx/nx-sha256.c
@@ -55,70 +55,86 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data,
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
 	struct nx_sg *in_sg;
-	u64 to_process, leftover;
+	u64 to_process, leftover, total;
+	u32 max_sg_len;
 	int rc = 0;
 
-	if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
-		/* we've hit the nx chip previously and we're updating again,
-		 * so copy over the partial digest */
-		memcpy(csbcpb->cpb.sha256.input_partial_digest,
-		       csbcpb->cpb.sha256.message_digest, SHA256_DIGEST_SIZE);
-	}
-
 	/* 2 cases for total data len:
-	 *  1: <= SHA256_BLOCK_SIZE: copy into state, return 0
-	 *  2: > SHA256_BLOCK_SIZE: process X blocks, copy in leftover
+	 *  1: < SHA256_BLOCK_SIZE: copy into state, return 0
+	 *  2: >= SHA256_BLOCK_SIZE: process X blocks, copy in leftover
 	 */
-	if (len + sctx->count < SHA256_BLOCK_SIZE) {
+	total = sctx->count + len;
+	if (total < SHA256_BLOCK_SIZE) {
 		memcpy(sctx->buf + sctx->count, data, len);
 		sctx->count += len;
 		goto out;
 	}
 
-	/* to_process: the SHA256_BLOCK_SIZE data chunk to process in this
-	 * update */
-	to_process = (sctx->count + len) & ~(SHA256_BLOCK_SIZE - 1);
-	leftover = (sctx->count + len) & (SHA256_BLOCK_SIZE - 1);
+	in_sg = nx_ctx->in_sg;
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	if (sctx->count) {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
-					 sctx->count, nx_ctx->ap->sglen);
-		in_sg = nx_build_sg_list(in_sg, (u8 *)data,
+	do {
+		/*
+		 * to_process: the SHA256_BLOCK_SIZE data chunk to process in
+		 * this update. This value is also restricted by the sg list
+		 * limits.
+		 */
+		to_process = min_t(u64, total, nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(SHA256_BLOCK_SIZE - 1);
+		leftover = total - to_process;
+
+		if (sctx->count) {
+			in_sg = nx_build_sg_list(nx_ctx->in_sg,
+						 (u8 *) sctx->buf,
+						 sctx->count, max_sg_len);
+		}
+		in_sg = nx_build_sg_list(in_sg, (u8 *) data,
 					 to_process - sctx->count,
-					 nx_ctx->ap->sglen);
+					 max_sg_len);
 		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
 					sizeof(struct nx_sg);
-	} else {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data,
-					 to_process, nx_ctx->ap->sglen);
-		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
-					sizeof(struct nx_sg);
-	}
 
-	NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
+			/*
+			 * we've hit the nx chip previously and we're updating
+			 * again, so copy over the partial digest.
+			 */
+			memcpy(csbcpb->cpb.sha256.input_partial_digest,
+			       csbcpb->cpb.sha256.message_digest,
+			       SHA256_DIGEST_SIZE);
+		}
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		atomic_inc(&(nx_ctx->stats->sha256_ops));
+		csbcpb->cpb.sha256.message_bit_length += (u64)
+			(csbcpb->cpb.sha256.spbc * 8);
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		/* everything after the first update is continuation */
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 
-	atomic_inc(&(nx_ctx->stats->sha256_ops));
+		total -= to_process;
+		data += to_process;
+		sctx->count = 0;
+		in_sg = nx_ctx->in_sg;
+	} while (leftover >= SHA256_BLOCK_SIZE);
 
 	/* copy the leftover back into the state struct */
 	if (leftover)
-		memcpy(sctx->buf, data + len - leftover, leftover);
+		memcpy(sctx->buf, data, leftover);
 	sctx->count = leftover;
-
-	csbcpb->cpb.sha256.message_bit_length += (u64)
-		(csbcpb->cpb.sha256.spbc * 8);
-
-	/* everything after the first update is continuation */
-	NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 out:
 	return rc;
 }
@@ -129,8 +145,10 @@ static int nx_sha256_final(struct shash_desc *desc, u8 *out)
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
 	struct nx_sg *in_sg, *out_sg;
+	u32 max_sg_len;
 	int rc;
 
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len, nx_ctx->ap->sglen);
 
 	if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
 		/* we've hit the nx chip previously, now we're finalizing,
@@ -146,9 +164,9 @@ static int nx_sha256_final(struct shash_desc *desc, u8 *out)
 	csbcpb->cpb.sha256.message_bit_length += (u64)(sctx->count * 8);
 
 	in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
-				 sctx->count, nx_ctx->ap->sglen);
+				 sctx->count, max_sg_len);
 	out_sg = nx_build_sg_list(nx_ctx->out_sg, out, SHA256_DIGEST_SIZE,
-				  nx_ctx->ap->sglen);
+				  max_sg_len);
 	nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
 	nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
 
diff --git a/drivers/crypto/nx/nx-sha512.c b/drivers/crypto/nx/nx-sha512.c
index 08eee11..2d6d913 100644
--- a/drivers/crypto/nx/nx-sha512.c
+++ b/drivers/crypto/nx/nx-sha512.c
@@ -55,72 +55,88 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
 	struct nx_sg *in_sg;
-	u64 to_process, leftover, spbc_bits;
+	u64 to_process, leftover, total, spbc_bits;
+	u32 max_sg_len;
 	int rc = 0;
 
-	if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
-		/* we've hit the nx chip previously and we're updating again,
-		 * so copy over the partial digest */
-		memcpy(csbcpb->cpb.sha512.input_partial_digest,
-		       csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE);
-	}
-
 	/* 2 cases for total data len:
-	 *  1: <= SHA512_BLOCK_SIZE: copy into state, return 0
-	 *  2: > SHA512_BLOCK_SIZE: process X blocks, copy in leftover
+	 *  1: < SHA512_BLOCK_SIZE: copy into state, return 0
+	 *  2: >= SHA512_BLOCK_SIZE: process X blocks, copy in leftover
 	 */
-	if ((u64)len + sctx->count[0] < SHA512_BLOCK_SIZE) {
+	total = sctx->count[0] + len;
+	if (total < SHA512_BLOCK_SIZE) {
 		memcpy(sctx->buf + sctx->count[0], data, len);
 		sctx->count[0] += len;
 		goto out;
 	}
 
-	/* to_process: the SHA512_BLOCK_SIZE data chunk to process in this
-	 * update */
-	to_process = (sctx->count[0] + len) & ~(SHA512_BLOCK_SIZE - 1);
-	leftover = (sctx->count[0] + len) & (SHA512_BLOCK_SIZE - 1);
+	in_sg = nx_ctx->in_sg;
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	if (sctx->count[0]) {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
-					 sctx->count[0], nx_ctx->ap->sglen);
-		in_sg = nx_build_sg_list(in_sg, (u8 *)data,
+	do {
+		/*
+		 * to_process: the SHA512_BLOCK_SIZE data chunk to process in
+		 * this update. This value is also restricted by the sg list
+		 * limits.
+		 */
+		to_process = min_t(u64, total, nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(SHA512_BLOCK_SIZE - 1);
+		leftover = total - to_process;
+
+		if (sctx->count[0]) {
+			in_sg = nx_build_sg_list(nx_ctx->in_sg,
+						 (u8 *) sctx->buf,
+						 sctx->count[0], max_sg_len);
+		}
+		in_sg = nx_build_sg_list(in_sg, (u8 *) data,
 					 to_process - sctx->count[0],
-					 nx_ctx->ap->sglen);
+					 max_sg_len);
 		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
 					sizeof(struct nx_sg);
-	} else {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data,
-					 to_process, nx_ctx->ap->sglen);
-		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
-					sizeof(struct nx_sg);
-	}
 
-	NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
+			/*
+			 * we've hit the nx chip previously and we're updating
+			 * again, so copy over the partial digest.
+			 */
+			memcpy(csbcpb->cpb.sha512.input_partial_digest,
+			       csbcpb->cpb.sha512.message_digest,
+			       SHA512_DIGEST_SIZE);
+		}
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		atomic_inc(&(nx_ctx->stats->sha512_ops));
+		spbc_bits = csbcpb->cpb.sha512.spbc * 8;
+		csbcpb->cpb.sha512.message_bit_length_lo += spbc_bits;
+		if (csbcpb->cpb.sha512.message_bit_length_lo < spbc_bits)
+			csbcpb->cpb.sha512.message_bit_length_hi++;
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		/* everything after the first update is continuation */
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 
-	atomic_inc(&(nx_ctx->stats->sha512_ops));
+		total -= to_process;
+		data += to_process;
+		sctx->count[0] = 0;
+		in_sg = nx_ctx->in_sg;
+	} while (leftover >= SHA512_BLOCK_SIZE);
 
 	/* copy the leftover back into the state struct */
 	if (leftover)
-		memcpy(sctx->buf, data + len - leftover, leftover);
+		memcpy(sctx->buf, data, leftover);
 	sctx->count[0] = leftover;
-
-	spbc_bits = csbcpb->cpb.sha512.spbc * 8;
-	csbcpb->cpb.sha512.message_bit_length_lo += spbc_bits;
-	if (csbcpb->cpb.sha512.message_bit_length_lo < spbc_bits)
-		csbcpb->cpb.sha512.message_bit_length_hi++;
-
-	/* everything after the first update is continuation */
-	NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 out:
 	return rc;
 }
@@ -131,9 +147,12 @@ static int nx_sha512_final(struct shash_desc *desc, u8 *out)
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
 	struct nx_sg *in_sg, *out_sg;
+	u32 max_sg_len;
 	u64 count0;
 	int rc;
 
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len, nx_ctx->ap->sglen);
+
 	if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
 		/* we've hit the nx chip previously, now we're finalizing,
 		 * so copy over the partial digest */
@@ -152,9 +171,9 @@ static int nx_sha512_final(struct shash_desc *desc, u8 *out)
 		csbcpb->cpb.sha512.message_bit_length_hi++;
 
 	in_sg = nx_build_sg_list(nx_ctx->in_sg, sctx->buf, sctx->count[0],
-				 nx_ctx->ap->sglen);
+				 max_sg_len);
 	out_sg = nx_build_sg_list(nx_ctx->out_sg, out, SHA512_DIGEST_SIZE,
-				  nx_ctx->ap->sglen);
+				  max_sg_len);
 	nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
 	nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
 
-- 
1.7.12



* Re: [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2
  2013-07-26 17:08 ` [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2 Marcelo Cerri
@ 2013-07-26 22:29   ` Benjamin Herrenschmidt
  2013-07-29 15:24     ` Marcelo Cerri
  2013-07-26 22:31   ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 8+ messages in thread
From: Benjamin Herrenschmidt @ 2013-07-26 22:29 UTC (permalink / raw)
  To: Marcelo Cerri
  Cc: linux-kernel, linux-crypto, Fionnuala Gunter, Joel Schopp,
	Joy Latten

On Fri, 2013-07-26 at 14:08 -0300, Marcelo Cerri wrote:
> 
> Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
> Signed-off-by: Joel Schopp <jschopp@linux.vnet.ibm.com>
> Signed-off-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
> Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
> ---

Why the enormous S-o-b list? Did every one of these people actually
carry the patch? If it's just acks or reviews, please use the
corresponding Acked-by or Reviewed-by.

Cheers,
Ben.




* Re: [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2
  2013-07-26 17:08 ` [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2 Marcelo Cerri
  2013-07-26 22:29   ` Benjamin Herrenschmidt
@ 2013-07-26 22:31   ` Benjamin Herrenschmidt
  2013-07-29 15:19     ` Marcelo Cerri
  1 sibling, 1 reply; 8+ messages in thread
From: Benjamin Herrenschmidt @ 2013-07-26 22:31 UTC (permalink / raw)
  To: Marcelo Cerri
  Cc: linux-kernel, linux-crypto, Fionnuala Gunter, Joel Schopp,
	Joy Latten

On Fri, 2013-07-26 at 14:08 -0300, Marcelo Cerri wrote:
> ---
>  drivers/crypto/nx/nx-sha256.c | 108 +++++++++++++++++++++++-----------------
>  drivers/crypto/nx/nx-sha512.c | 113 ++++++++++++++++++++++++------------------
>  2 files changed, 129 insertions(+), 92 deletions(-)

What about the other nx drivers? Are they not affected?

Cheers,
Ben.




* Re: [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2
  2013-07-26 22:31   ` Benjamin Herrenschmidt
@ 2013-07-29 15:19     ` Marcelo Cerri
  0 siblings, 0 replies; 8+ messages in thread
From: Marcelo Cerri @ 2013-07-29 15:19 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: linux-kernel, linux-crypto, Fionnuala Gunter, Joel Schopp,
	Joy Latten

We think it's very likely that AES is also affected by a similar
problem, but we still have to test it, and I'd like to provide a
separate patch for it.

Regards,
Marcelo

On Sat, Jul 27, 2013 at 08:31:32AM +1000, Benjamin Herrenschmidt wrote:
> On Fri, 2013-07-26 at 14:08 -0300, Marcelo Cerri wrote:
> > ---
> >  drivers/crypto/nx/nx-sha256.c | 108 +++++++++++++++++++++++-----------------
> >  drivers/crypto/nx/nx-sha512.c | 113 ++++++++++++++++++++++++------------------
> >  2 files changed, 129 insertions(+), 92 deletions(-)
> 
> What about the other nx drivers? Are they not affected?
> 
> Cheers,
> Ben.
> 
> 



* Re: [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2
  2013-07-26 22:29   ` Benjamin Herrenschmidt
@ 2013-07-29 15:24     ` Marcelo Cerri
  0 siblings, 0 replies; 8+ messages in thread
From: Marcelo Cerri @ 2013-07-29 15:24 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: linux-kernel, linux-crypto, Fionnuala Gunter, Joel Schopp,
	Joy Latten

Hi Ben,

Everyone in the S-o-b list participated in solving this bug with code
and/or ideas on how to fix it, as well as in reviewing and testing the
final version of the patches.

I'd like to keep it as it is, if you don't mind.

Regards,
Marcelo

On Sat, Jul 27, 2013 at 08:29:59AM +1000, Benjamin Herrenschmidt wrote:
> On Fri, 2013-07-26 at 14:08 -0300, Marcelo Cerri wrote:
> > 
> > Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
> > Signed-off-by: Joel Schopp <jschopp@linux.vnet.ibm.com>
> > Signed-off-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
> > Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
> > ---
> 
> > Why the enormous S-o-b list? Did every one of these people actually
> > carry the patch? If it's just acks or reviews, please use the
> > corresponding Acked-by or Reviewed-by.
> 
> Cheers,
> Ben.
> 
> 



* Re: [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large
  2013-07-26 17:08 [PATCH 0/2] drivers/crypto/nx: fixes when input data is too large Marcelo Cerri
  2013-07-26 17:08 ` [PATCH 1/2] drivers/crypto/nx: fix physical addresses added to sg lists Marcelo Cerri
  2013-07-26 17:08 ` [PATCH 2/2] drivers/crypto/nx: fix limits to sg lists for SHA-2 Marcelo Cerri
@ 2013-08-01  9:26 ` Benjamin Herrenschmidt
  2 siblings, 0 replies; 8+ messages in thread
From: Benjamin Herrenschmidt @ 2013-08-01  9:26 UTC (permalink / raw)
  To: Marcelo Cerri; +Cc: linux-kernel, linux-crypto

On Fri, 2013-07-26 at 14:08 -0300, Marcelo Cerri wrote:
> This series of patches fixes two bugs that are triggered when the
> input data is too large. The first is caused by a miscalculation of
> physical addresses, and the second by limits that the co-processor
> imposes on the input data.

BTW, are these supposed to go upstream via my tree or via the crypto
tree?

They are not part of my latest pull request to Linus because they were
not CC'ed to linuxppc-dev, so I didn't see them while collecting patches
from patchwork.

If you intend to have them go via the crypto tree, that's fine; but if
you intend to have them go via powerpc, please resend with the correct
mailing list on CC.

Cheers,
Ben.

> Marcelo Cerri (2):
>   drivers/crypto/nx: fix physical addresses added to sg lists
>   drivers/crypto/nx: fix limits to sg lists for SHA-2
> 
>  drivers/crypto/nx/nx-sha256.c | 108 +++++++++++++++++++++++-----------------
>  drivers/crypto/nx/nx-sha512.c | 113 ++++++++++++++++++++++++------------------
>  drivers/crypto/nx/nx.c        |  22 ++++++--
>  3 files changed, 148 insertions(+), 95 deletions(-)
> 



