public inbox for linux-crypto@vger.kernel.org
* [PATCH 0/3] crypto: Use nth_page instead of doing it by hand
@ 2025-03-11 10:20 Herbert Xu
  2025-03-11 10:20 ` [PATCH 1/3] crypto: scatterwalk - " Herbert Xu
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Herbert Xu @ 2025-03-11 10:20 UTC (permalink / raw)
  To: Linux Crypto Mailing List

This patch series is based on top of:

https://patchwork.kernel.org/project/linux-crypto/patch/20250310172016.153423-1-ebiggers@kernel.org/

Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page.  Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):
    
        commit 3957f2b34960d85b63e814262a8be7d5ad91444d
        Author: James Morris <jmorris@intercode.com.au>
        Date:   Sun Feb 2 07:35:32 2003 -0800

            [CRYPTO]: in/out scatterlist support for ciphers.

Fix this by using nth_page.

Herbert Xu (3):
  crypto: scatterwalk - Use nth_page instead of doing it by hand
  crypto: hash - Use nth_page instead of doing it by hand
  crypto: krb5 - Use SG miter instead of doing it by hand

 crypto/ahash.c                   | 38 +++++++++++++++++++++-----------
 crypto/krb5/rfc3961_simplified.c | 34 +++++++++++++---------------
 include/crypto/scatterwalk.h     | 21 +++++++++---------
 3 files changed, 51 insertions(+), 42 deletions(-)

-- 
2.39.5


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/3] crypto: scatterwalk - Use nth_page instead of doing it by hand
  2025-03-11 10:20 [PATCH 0/3] crypto: Use nth_page instead of doing it by hand Herbert Xu
@ 2025-03-11 10:20 ` Herbert Xu
  2025-03-11 17:26   ` Eric Biggers
  2025-03-11 10:20 ` [PATCH 2/3] crypto: hash " Herbert Xu
  2025-03-11 10:20 ` [PATCH 3/3] crypto: krb5 - Use SG miter " Herbert Xu
  2 siblings, 1 reply; 10+ messages in thread
From: Herbert Xu @ 2025-03-11 10:20 UTC (permalink / raw)
  To: Linux Crypto Mailing List

Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page.  Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):

	commit 3957f2b34960d85b63e814262a8be7d5ad91444d
	Author: James Morris <jmorris@intercode.com.au>
	Date:   Sun Feb 2 07:35:32 2003 -0800

	    [CRYPTO]: in/out scatterlist support for ciphers.

Fix this by using nth_page.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 include/crypto/scatterwalk.h | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index b7e617ae4442..4fc70b8422c5 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -99,26 +99,27 @@ static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
 
 static inline void scatterwalk_map(struct scatter_walk *walk)
 {
-	struct page *base_page = sg_page(walk->sg);
+	struct page *page = sg_page(walk->sg);
+	unsigned int offset = walk->offset;
+	void *addr;
+
+	page = nth_page(page, offset >> PAGE_SHIFT);
+	offset = offset_in_page(offset);
 
 	if (IS_ENABLED(CONFIG_HIGHMEM)) {
-		walk->__addr = kmap_local_page(base_page +
-					       (walk->offset >> PAGE_SHIFT)) +
-			       offset_in_page(walk->offset);
+		addr = kmap_local_page(page) + offset;
 	} else {
 		/*
 		 * When !HIGHMEM we allow the walker to return segments that
 		 * span a page boundary; see scatterwalk_clamp().  To make it
 		 * clear that in this case we're working in the linear buffer of
 		 * the whole sg entry in the kernel's direct map rather than
-		 * within the mapped buffer of a single page, compute the
-		 * address as an offset from the page_address() of the first
-		 * page of the sg entry.  Either way the result is the address
-		 * in the direct map, but this makes it clearer what is really
-		 * going on.
+		 * within the mapped buffer of a single page, use
+		 * page_address() instead of going through kmap.
 		 */
-		walk->__addr = page_address(base_page) + walk->offset;
+		addr = page_address(page) + offset;
 	}
+	walk->__addr = addr;
 }
 
 /**
-- 
2.39.5



* [PATCH 2/3] crypto: hash - Use nth_page instead of doing it by hand
  2025-03-11 10:20 [PATCH 0/3] crypto: Use nth_page instead of doing it by hand Herbert Xu
  2025-03-11 10:20 ` [PATCH 1/3] crypto: scatterwalk - " Herbert Xu
@ 2025-03-11 10:20 ` Herbert Xu
  2025-03-11 17:44   ` Eric Biggers
  2025-03-11 10:20 ` [PATCH 3/3] crypto: krb5 - Use SG miter " Herbert Xu
  2 siblings, 1 reply; 10+ messages in thread
From: Herbert Xu @ 2025-03-11 10:20 UTC (permalink / raw)
  To: Linux Crypto Mailing List

Use nth_page instead of adding n to the page pointer.

This also fixes a real bug in shash_ahash_digest where the
check for contiguous hash data may be incorrect in the presence
of highmem.  This could result in an incorrect hash or worse.

Fixes: 5f7082ed4f48 ("crypto: hash - Export shash through hash")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/ahash.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 9c26175c21a8..75d642897e36 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -16,6 +16,7 @@
 #include <linux/cryptouser.h>
 #include <linux/err.h>
 #include <linux/kernel.h>
+#include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -79,7 +80,7 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
 
 	sg = walk->sg;
 	walk->offset = sg->offset;
-	walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
+	walk->pg = nth_page(sg_page(walk->sg), walk->offset >> PAGE_SHIFT);
 	walk->offset = offset_in_page(walk->offset);
 	walk->entrylen = sg->length;
 
@@ -201,25 +202,36 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
 	unsigned int nbytes = req->nbytes;
 	struct scatterlist *sg;
 	unsigned int offset;
+	struct page *page;
+	void *data;
 	int err;
 
-	if (ahash_request_isvirt(req))
+	if (!nbytes || ahash_request_isvirt(req))
 		return crypto_shash_digest(desc, req->svirt, nbytes,
 					   req->result);
 
-	if (nbytes &&
-	    (sg = req->src, offset = sg->offset,
-	     nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
-		void *data;
+	sg = req->src;
+	if (nbytes > sg->length)
+		return crypto_shash_init(desc) ?:
+		       shash_ahash_finup(req, desc);
 
-		data = kmap_local_page(sg_page(sg));
-		err = crypto_shash_digest(desc, data + offset, nbytes,
-					  req->result);
-		kunmap_local(data);
-	} else
-		err = crypto_shash_init(desc) ?:
-		      shash_ahash_finup(req, desc);
+	page = sg_page(sg);
+	offset = sg->offset;
+	page = nth_page(page, offset >> PAGE_SHIFT);
+	offset = offset_in_page(offset);
 
+	if (!IS_ENABLED(CONFIG_HIGHMEM))
+		return crypto_shash_digest(desc, page_address(page) + offset,
+					   nbytes, req->result);
+
+	if (nbytes > (unsigned int)PAGE_SIZE - offset)
+		return crypto_shash_init(desc) ?:
+		       shash_ahash_finup(req, desc);
+
+	data = kmap_local_page(page);
+	err = crypto_shash_digest(desc, data + offset, nbytes,
+				  req->result);
+	kunmap_local(data);
 	return err;
 }
 EXPORT_SYMBOL_GPL(shash_ahash_digest);
-- 
2.39.5



* [PATCH 3/3] crypto: krb5 - Use SG miter instead of doing it by hand
  2025-03-11 10:20 [PATCH 0/3] crypto: Use nth_page instead of doing it by hand Herbert Xu
  2025-03-11 10:20 ` [PATCH 1/3] crypto: scatterwalk - " Herbert Xu
  2025-03-11 10:20 ` [PATCH 2/3] crypto: hash " Herbert Xu
@ 2025-03-11 10:20 ` Herbert Xu
  2 siblings, 0 replies; 10+ messages in thread
From: Herbert Xu @ 2025-03-11 10:20 UTC (permalink / raw)
  To: Linux Crypto Mailing List

The function crypto_shash_update_sg iterates through an SG by
hand.  It fails to handle corner cases such as SG entries longer
than a page.  Fix this by using the SG iterator.

Fixes: 348f5669d1f6 ("crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/krb5/rfc3961_simplified.c | 34 ++++++++++++++------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/crypto/krb5/rfc3961_simplified.c b/crypto/krb5/rfc3961_simplified.c
index c1dcb0dd3a00..d9cf1bfa11a5 100644
--- a/crypto/krb5/rfc3961_simplified.c
+++ b/crypto/krb5/rfc3961_simplified.c
@@ -67,6 +67,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/random.h>
+#include <linux/scatterlist.h>
 #include <linux/skbuff.h>
 #include <linux/slab.h>
 #include <linux/highmem.h>
@@ -83,26 +84,21 @@
 int crypto_shash_update_sg(struct shash_desc *desc, struct scatterlist *sg,
 			   size_t offset, size_t len)
 {
-	do {
-		int ret;
+	struct sg_mapping_iter miter;
+	size_t i, n;
+	int ret = 0;
 
-		if (offset < sg->length) {
-			struct page *page = sg_page(sg);
-			void *p = kmap_local_page(page);
-			void *q = p + sg->offset + offset;
-			size_t seg = min_t(size_t, len, sg->length - offset);
-
-			ret = crypto_shash_update(desc, q, seg);
-			kunmap_local(p);
-			if (ret < 0)
-				return ret;
-			len -= seg;
-			offset = 0;
-		} else {
-			offset -= sg->length;
-		}
-	} while (len > 0 && (sg = sg_next(sg)));
-	return 0;
+	sg_miter_start(&miter, sg, sg_nents(sg),
+		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+	sg_miter_skip(&miter, offset);
+	for (i = 0; i < len; i += n) {
+		sg_miter_next(&miter);
+		n = min(miter.length, len - i);
+		ret = crypto_shash_update(desc, miter.addr, n);
+		if (ret < 0)
+			break;
+	}
+	sg_miter_stop(&miter);
+	return ret;
 }
 
 static int rfc3961_do_encrypt(struct crypto_sync_skcipher *tfm, void *iv,
-- 
2.39.5



* Re: [PATCH 1/3] crypto: scatterwalk - Use nth_page instead of doing it by hand
  2025-03-11 10:20 ` [PATCH 1/3] crypto: scatterwalk - " Herbert Xu
@ 2025-03-11 17:26   ` Eric Biggers
  2025-03-12  2:23     ` Herbert Xu
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Biggers @ 2025-03-11 17:26 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Tue, Mar 11, 2025 at 06:20:29PM +0800, Herbert Xu wrote:
>  static inline void scatterwalk_map(struct scatter_walk *walk)
>  {
> -	struct page *base_page = sg_page(walk->sg);
> +	struct page *page = sg_page(walk->sg);
> +	unsigned int offset = walk->offset;
> +	void *addr;
> +
> +	page = nth_page(page, offset >> PAGE_SHIFT);
> +	offset = offset_in_page(offset);
>  
>  	if (IS_ENABLED(CONFIG_HIGHMEM)) {
> -		walk->__addr = kmap_local_page(base_page +
> -					       (walk->offset >> PAGE_SHIFT)) +
> -			       offset_in_page(walk->offset);
> +		addr = kmap_local_page(page) + offset;
>  	} else {
>  		/*
>  		 * When !HIGHMEM we allow the walker to return segments that
>  		 * span a page boundary; see scatterwalk_clamp().  To make it
>  		 * clear that in this case we're working in the linear buffer of
>  		 * the whole sg entry in the kernel's direct map rather than
> -		 * within the mapped buffer of a single page, compute the
> -		 * address as an offset from the page_address() of the first
> -		 * page of the sg entry.  Either way the result is the address
> -		 * in the direct map, but this makes it clearer what is really
> -		 * going on.
> +		 * within the mapped buffer of a single page, use
> +		 * page_address() instead of going through kmap.
>  		 */
> -		walk->__addr = page_address(base_page) + walk->offset;
> +		addr = page_address(page) + offset;
>  	}
> +	walk->__addr = addr;

In the !HIGHMEM case (i.e., the common case) this is just worse, though.  It
expands into more instructions than before, only to get the same linear address
that it did before.  You also seem to be ignoring the comment that explains that
we're working in the linear buffer of the whole sg entry.

- Eric


* Re: [PATCH 2/3] crypto: hash - Use nth_page instead of doing it by hand
  2025-03-11 10:20 ` [PATCH 2/3] crypto: hash " Herbert Xu
@ 2025-03-11 17:44   ` Eric Biggers
  2025-03-11 18:05     ` Eric Biggers
  2025-03-12  2:30     ` Herbert Xu
  0 siblings, 2 replies; 10+ messages in thread
From: Eric Biggers @ 2025-03-11 17:44 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Tue, Mar 11, 2025 at 06:20:31PM +0800, Herbert Xu wrote:
> Use nth_page instead of adding n to the page pointer.
> 
> This also fixes a real bug in shash_ahash_digest where the
> check for contiguous hash data may be incorrect in the presence
> of highmem.  This could result in an incorrect hash or worse.
> 
> Fixes: 5f7082ed4f48 ("crypto: hash - Export shash through hash")
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> ---
>  crypto/ahash.c | 38 +++++++++++++++++++++++++-------------
>  1 file changed, 25 insertions(+), 13 deletions(-)
> 
> diff --git a/crypto/ahash.c b/crypto/ahash.c
> index 9c26175c21a8..75d642897e36 100644
> --- a/crypto/ahash.c
> +++ b/crypto/ahash.c
> @@ -16,6 +16,7 @@
>  #include <linux/cryptouser.h>
>  #include <linux/err.h>
>  #include <linux/kernel.h>
> +#include <linux/mm.h>
>  #include <linux/module.h>
>  #include <linux/sched.h>
>  #include <linux/slab.h>
> @@ -79,7 +80,7 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
>  
>  	sg = walk->sg;
>  	walk->offset = sg->offset;
> -	walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
> +	walk->pg = nth_page(sg_page(walk->sg), walk->offset >> PAGE_SHIFT);
>  	walk->offset = offset_in_page(walk->offset);
>  	walk->entrylen = sg->length;
>  
> @@ -201,25 +202,36 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
>  	unsigned int nbytes = req->nbytes;
>  	struct scatterlist *sg;
>  	unsigned int offset;
> +	struct page *page;
> +	void *data;
>  	int err;
>  
> -	if (ahash_request_isvirt(req))
> +	if (!nbytes || ahash_request_isvirt(req))
>  		return crypto_shash_digest(desc, req->svirt, nbytes,
>  					   req->result);
>  
> -	if (nbytes &&
> -	    (sg = req->src, offset = sg->offset,
> -	     nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
> -		void *data;
> +	sg = req->src;
> +	if (nbytes > sg->length)
> +		return crypto_shash_init(desc) ?:
> +		       shash_ahash_finup(req, desc);
>  
> -		data = kmap_local_page(sg_page(sg));
> -		err = crypto_shash_digest(desc, data + offset, nbytes,
> -					  req->result);
> -		kunmap_local(data);
> -	} else
> -		err = crypto_shash_init(desc) ?:
> -		      shash_ahash_finup(req, desc);
> +	page = sg_page(sg);
> +	offset = sg->offset;
> +	page = nth_page(page, offset >> PAGE_SHIFT);
> +	offset = offset_in_page(offset);
>  
> +	if (!IS_ENABLED(CONFIG_HIGHMEM))
> +		return crypto_shash_digest(desc, page_address(page) + offset,
> +					   nbytes, req->result);
> +
> +	if (nbytes > (unsigned int)PAGE_SIZE - offset)
> +		return crypto_shash_init(desc) ?:
> +		       shash_ahash_finup(req, desc);
> +
> +	data = kmap_local_page(page);
> +	err = crypto_shash_digest(desc, data + offset, nbytes,
> +				  req->result);
> +	kunmap_local(data);
>  	return err;

I guess you think this is fixing a bug in the case where sg->offset > PAGE_SIZE?
Is that case even supported?  It is supposed to be the offset into a page.

Even if so, a simpler fix (1 line) would be to use:
'sg->length >= nbytes && sg->offset + nbytes <= PAGE_SIZE'

- Eric


* Re: [PATCH 2/3] crypto: hash - Use nth_page instead of doing it by hand
  2025-03-11 17:44   ` Eric Biggers
@ 2025-03-11 18:05     ` Eric Biggers
  2025-03-12  2:30     ` Herbert Xu
  1 sibling, 0 replies; 10+ messages in thread
From: Eric Biggers @ 2025-03-11 18:05 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Tue, Mar 11, 2025 at 10:44:31AM -0700, Eric Biggers wrote:
> On Tue, Mar 11, 2025 at 06:20:31PM +0800, Herbert Xu wrote:
> > Use nth_page instead of adding n to the page pointer.
> > 
> > This also fixes a real bug in shash_ahash_digest where the
> > check for contiguous hash data may be incorrect in the presence
> > of highmem.  This could result in an incorrect hash or worse.
> > 
> > Fixes: 5f7082ed4f48 ("crypto: hash - Export shash through hash")
> > Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> > ---
> >  crypto/ahash.c | 38 +++++++++++++++++++++++++-------------
> >  1 file changed, 25 insertions(+), 13 deletions(-)
> > 
> > diff --git a/crypto/ahash.c b/crypto/ahash.c
> > index 9c26175c21a8..75d642897e36 100644
> > --- a/crypto/ahash.c
> > +++ b/crypto/ahash.c
> > @@ -16,6 +16,7 @@
> >  #include <linux/cryptouser.h>
> >  #include <linux/err.h>
> >  #include <linux/kernel.h>
> > +#include <linux/mm.h>
> >  #include <linux/module.h>
> >  #include <linux/sched.h>
> >  #include <linux/slab.h>
> > @@ -79,7 +80,7 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
> >  
> >  	sg = walk->sg;
> >  	walk->offset = sg->offset;
> > -	walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
> > +	walk->pg = nth_page(sg_page(walk->sg), walk->offset >> PAGE_SHIFT);
> >  	walk->offset = offset_in_page(walk->offset);
> >  	walk->entrylen = sg->length;
> >  
> > @@ -201,25 +202,36 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
> >  	unsigned int nbytes = req->nbytes;
> >  	struct scatterlist *sg;
> >  	unsigned int offset;
> > +	struct page *page;
> > +	void *data;
> >  	int err;
> >  
> > -	if (ahash_request_isvirt(req))
> > +	if (!nbytes || ahash_request_isvirt(req))
> >  		return crypto_shash_digest(desc, req->svirt, nbytes,
> >  					   req->result);
> >  
> > -	if (nbytes &&
> > -	    (sg = req->src, offset = sg->offset,
> > -	     nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
> > -		void *data;
> > +	sg = req->src;
> > +	if (nbytes > sg->length)
> > +		return crypto_shash_init(desc) ?:
> > +		       shash_ahash_finup(req, desc);
> >  
> > -		data = kmap_local_page(sg_page(sg));
> > -		err = crypto_shash_digest(desc, data + offset, nbytes,
> > -					  req->result);
> > -		kunmap_local(data);
> > -	} else
> > -		err = crypto_shash_init(desc) ?:
> > -		      shash_ahash_finup(req, desc);
> > +	page = sg_page(sg);
> > +	offset = sg->offset;
> > +	page = nth_page(page, offset >> PAGE_SHIFT);
> > +	offset = offset_in_page(offset);
> >  
> > +	if (!IS_ENABLED(CONFIG_HIGHMEM))
> > +		return crypto_shash_digest(desc, page_address(page) + offset,
> > +					   nbytes, req->result);
> > +
> > +	if (nbytes > (unsigned int)PAGE_SIZE - offset)
> > +		return crypto_shash_init(desc) ?:
> > +		       shash_ahash_finup(req, desc);
> > +
> > +	data = kmap_local_page(page);
> > +	err = crypto_shash_digest(desc, data + offset, nbytes,
> > +				  req->result);
> > +	kunmap_local(data);
> >  	return err;
> 
> I guess you think this is fixing a bug in the case where sg->offset > PAGE_SIZE?
> Is that case even supported?  It is supposed to be the offset into a page.
> 
> Even if so, a simpler fix (1 line) would be to use:
> 'sg->length >= nbytes && sg->offset + nbytes <= PAGE_SIZE'

Or just make this optimization specific to !HIGHMEM, and use
nbytes <= sg->length and sg_virt().

- Eric


* Re: [PATCH 1/3] crypto: scatterwalk - Use nth_page instead of doing it by hand
  2025-03-11 17:26   ` Eric Biggers
@ 2025-03-12  2:23     ` Herbert Xu
  0 siblings, 0 replies; 10+ messages in thread
From: Herbert Xu @ 2025-03-12  2:23 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List

On Tue, Mar 11, 2025 at 10:26:24AM -0700, Eric Biggers wrote:
>
> In the !HIGHMEM case (i.e., the common case) this is just worse, though.  It
> expands into more instructions than before, only to get the same linear address
> that it did before.  You also seem to be ignoring the comment that explains that
> we're working in the linear buffer of the whole sg entry.

Thanks for letting me know.  I had assumed that this would all get
optimised away.  Let me look into this atrocious code generation.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 2/3] crypto: hash - Use nth_page instead of doing it by hand
  2025-03-11 17:44   ` Eric Biggers
  2025-03-11 18:05     ` Eric Biggers
@ 2025-03-12  2:30     ` Herbert Xu
  2025-03-12  2:49       ` Eric Biggers
  1 sibling, 1 reply; 10+ messages in thread
From: Herbert Xu @ 2025-03-12  2:30 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List

On Tue, Mar 11, 2025 at 10:44:31AM -0700, Eric Biggers wrote:
>
> I guess you think this is fixing a bug in the case where sg->offset > PAGE_SIZE?
> Is that case even supported?  It is supposed to be the offset into a page.

Supported? Obviously not since this bug has been there since the
very start :)

But is it conceivable to create such a scatterlist?  It certainly
seems to be.  If we were to create a scatterlist from a single
order-n folio, then it's quite reasonable for the offset to be
greater than PAGE_SIZE.

> Even if so, a simpler fix (1 line) would be to use:
> 'sg->length >= nbytes && sg->offset + nbytes <= PAGE_SIZE'

That would just mean more use of the fallback code path.  Not
a big deal but I was feeling generous.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 2/3] crypto: hash - Use nth_page instead of doing it by hand
  2025-03-12  2:30     ` Herbert Xu
@ 2025-03-12  2:49       ` Eric Biggers
  0 siblings, 0 replies; 10+ messages in thread
From: Eric Biggers @ 2025-03-12  2:49 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Wed, Mar 12, 2025 at 10:30:06AM +0800, Herbert Xu wrote:
> On Tue, Mar 11, 2025 at 10:44:31AM -0700, Eric Biggers wrote:
> >
> > I guess you think this is fixing a bug in the case where sg->offset > PAGE_SIZE?
> > Is that case even supported?  It is supposed to be the offset into a page.
> 
> Supported? Obviously not since this bug has been there since the
> very start :)
> 
> But is it conceivable to create such a scatterlist, it certainly
> seems to be.  If we were to create a scatterlist from a single
> order-n folio, then it's quite reasonable for the offset to be
> greater than PAGE_SIZE.

If it needs to work, then it needs to be tested.  Currently the self-tests
always use sg_set_buf(), and thus scatterlist::offset always gets set to a value
less than PAGE_SIZE.

And this is yet more evidence that scatterlist based APIs are just a really bad
idea.  Even 20 years later, there is still no precise definition of what a
scatterlist even is...

> > Even if so, a simpler fix (1 line) would be to use:
> > 'sg->length >= nbytes && sg->offset + nbytes <= PAGE_SIZE'
> 
> That would just mean more use of the fallback code path.  Not
> a big deal but I was feeling generous.

My second suggestion, making it conditional on !HIGHMEM and using nbytes <=
sg->length, would also be simple and would reduce the overhead for !HIGHMEM
(rather than increasing it as your patch does, just like patch 1 which had the
same problem).  !HIGHMEM is the common case (by far) these days, of course.

- Eric


end of thread, other threads:[~2025-03-12  2:49 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-11 10:20 [PATCH 0/3] crypto: Use nth_page instead of doing it by hand Herbert Xu
2025-03-11 10:20 ` [PATCH 1/3] crypto: scatterwalk - " Herbert Xu
2025-03-11 17:26   ` Eric Biggers
2025-03-12  2:23     ` Herbert Xu
2025-03-11 10:20 ` [PATCH 2/3] crypto: hash " Herbert Xu
2025-03-11 17:44   ` Eric Biggers
2025-03-11 18:05     ` Eric Biggers
2025-03-12  2:30     ` Herbert Xu
2025-03-12  2:49       ` Eric Biggers
2025-03-11 10:20 ` [PATCH 3/3] crypto: krb5 - Use SG miter " Herbert Xu
