* [v4 PATCH 0/2] crypto: Use nth_page instead of doing it by hand
@ 2025-03-14 3:27 Herbert Xu
2025-03-14 3:27 ` [v4 PATCH 1/2] crypto: scatterwalk - " Herbert Xu
2025-03-14 3:27 ` [v4 PATCH 2/2] crypto: hash " Herbert Xu
0 siblings, 2 replies; 6+ messages in thread
From: Herbert Xu @ 2025-03-14 3:27 UTC (permalink / raw)
To: Linux Crypto Mailing List; +Cc: Eric Biggers
v4 removes the unrelated changes and restores nth_page.
This patch series is based on top of:
https://patchwork.kernel.org/project/linux-crypto/patch/b4b00e0fed2fe0e48a0d9b2270bed7e29b119f6a.1741842470.git.herbert@gondor.apana.org.au/
Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page. Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):
commit 3957f2b34960d85b63e814262a8be7d5ad91444d
Author: James Morris <jmorris@intercode.com.au>
Date: Sun Feb 2 07:35:32 2003 -0800
[CRYPTO]: in/out scatterlist support for ciphers.
Fix this by using nth_page.
Herbert Xu (2):
crypto: scatterwalk - Use nth_page instead of doing it by hand
crypto: hash - Use nth_page instead of doing it by hand
crypto/ahash.c | 4 ++--
include/crypto/scatterwalk.h | 30 ++++++++++++++++++++----------
2 files changed, 22 insertions(+), 12 deletions(-)
--
2.39.5
* [v4 PATCH 1/2] crypto: scatterwalk - Use nth_page instead of doing it by hand
2025-03-14 3:27 [v4 PATCH 0/2] crypto: Use nth_page instead of doing it by hand Herbert Xu
@ 2025-03-14 3:27 ` Herbert Xu
2025-03-16 3:31 ` Eric Biggers
2025-03-14 3:27 ` [v4 PATCH 2/2] crypto: hash " Herbert Xu
1 sibling, 1 reply; 6+ messages in thread
From: Herbert Xu @ 2025-03-14 3:27 UTC (permalink / raw)
To: Linux Crypto Mailing List; +Cc: Eric Biggers
Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page. Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):
commit 3957f2b34960d85b63e814262a8be7d5ad91444d
Author: James Morris <jmorris@intercode.com.au>
Date: Sun Feb 2 07:35:32 2003 -0800
[CRYPTO]: in/out scatterlist support for ciphers.
Fix this by using nth_page.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
include/crypto/scatterwalk.h | 30 ++++++++++++++++++++----------
1 file changed, 20 insertions(+), 10 deletions(-)
diff --git a/include/crypto/scatterwalk.h b/include/crypto/scatterwalk.h
index b7e617ae4442..94a8585f26b2 100644
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -100,11 +100,15 @@ static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
static inline void scatterwalk_map(struct scatter_walk *walk)
{
struct page *base_page = sg_page(walk->sg);
+ unsigned int offset = walk->offset;
+ void *addr;
if (IS_ENABLED(CONFIG_HIGHMEM)) {
- walk->__addr = kmap_local_page(base_page +
- (walk->offset >> PAGE_SHIFT)) +
- offset_in_page(walk->offset);
+ struct page *page;
+
+ page = nth_page(base_page, offset >> PAGE_SHIFT);
+ offset = offset_in_page(offset);
+ addr = kmap_local_page(page) + offset;
} else {
/*
* When !HIGHMEM we allow the walker to return segments that
@@ -117,8 +121,10 @@ static inline void scatterwalk_map(struct scatter_walk *walk)
* in the direct map, but this makes it clearer what is really
* going on.
*/
- walk->__addr = page_address(base_page) + walk->offset;
+ addr = page_address(base_page) + offset;
}
+
+ walk->__addr = addr;
}
/**
@@ -189,14 +195,18 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
* reliably optimized out or not.
*/
if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
- struct page *base_page, *start_page, *end_page, *page;
+ struct page *base_page;
+ unsigned int offset;
+ int start, end, i;
base_page = sg_page(walk->sg);
- start_page = base_page + (walk->offset >> PAGE_SHIFT);
- end_page = base_page + ((walk->offset + nbytes +
- PAGE_SIZE - 1) >> PAGE_SHIFT);
- for (page = start_page; page < end_page; page++)
- flush_dcache_page(page);
+ offset = walk->offset;
+ start = offset >> PAGE_SHIFT;
+ end = start + (nbytes >> PAGE_SHIFT);
+ end += (offset_in_page(offset) + offset_in_page(nbytes) +
+ PAGE_SIZE - 1) >> PAGE_SHIFT;
+ for (i = start; i < end; i++)
+ flush_dcache_page(nth_page(base_page, i));
}
scatterwalk_advance(walk, nbytes);
}
--
2.39.5
* [v4 PATCH 2/2] crypto: hash - Use nth_page instead of doing it by hand
2025-03-14 3:27 [v4 PATCH 0/2] crypto: Use nth_page instead of doing it by hand Herbert Xu
2025-03-14 3:27 ` [v4 PATCH 1/2] crypto: scatterwalk - " Herbert Xu
@ 2025-03-14 3:27 ` Herbert Xu
1 sibling, 0 replies; 6+ messages in thread
From: Herbert Xu @ 2025-03-14 3:27 UTC (permalink / raw)
To: Linux Crypto Mailing List; +Cc: Eric Biggers
Use nth_page instead of adding n to the page pointer.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/ahash.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 1fe594880295..06559e5a715b 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -80,7 +80,7 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
sg = walk->sg;
walk->offset = sg->offset;
- walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
+ walk->pg = nth_page(sg_page(walk->sg), (walk->offset >> PAGE_SHIFT));
walk->offset = offset_in_page(walk->offset);
walk->entrylen = sg->length;
@@ -221,7 +221,7 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
if (!IS_ENABLED(CONFIG_HIGHMEM))
return crypto_shash_digest(desc, data, nbytes, req->result);
- page += offset >> PAGE_SHIFT;
+ page = nth_page(page, offset >> PAGE_SHIFT);
offset = offset_in_page(offset);
if (nbytes > (unsigned int)PAGE_SIZE - offset)
--
2.39.5
* Re: [v4 PATCH 1/2] crypto: scatterwalk - Use nth_page instead of doing it by hand
2025-03-14 3:27 ` [v4 PATCH 1/2] crypto: scatterwalk - " Herbert Xu
@ 2025-03-16 3:31 ` Eric Biggers
2025-03-16 4:28 ` Herbert Xu
0 siblings, 1 reply; 6+ messages in thread
From: Eric Biggers @ 2025-03-16 3:31 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Fri, Mar 14, 2025 at 11:27:20AM +0800, Herbert Xu wrote:
> Curiously, the Crypto API scatterwalk incremented pages by hand
> rather than using nth_page. Possibly because scatterwalk predates
> nth_page (the following commit is from the history tree):
>
> commit 3957f2b34960d85b63e814262a8be7d5ad91444d
> Author: James Morris <jmorris@intercode.com.au>
> Date: Sun Feb 2 07:35:32 2003 -0800
>
> [CRYPTO]: in/out scatterlist support for ciphers.
>
> Fix this by using nth_page.
As I said on v2, this needs an explanation of what it's actually fixing.
> @@ -189,14 +195,18 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
> * reliably optimized out or not.
> */
> if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
> - struct page *base_page, *start_page, *end_page, *page;
> + struct page *base_page;
> + unsigned int offset;
> + int start, end, i;
>
> base_page = sg_page(walk->sg);
> - start_page = base_page + (walk->offset >> PAGE_SHIFT);
> - end_page = base_page + ((walk->offset + nbytes +
> - PAGE_SIZE - 1) >> PAGE_SHIFT);
> - for (page = start_page; page < end_page; page++)
> - flush_dcache_page(page);
> + offset = walk->offset;
> + start = offset >> PAGE_SHIFT;
> + end = start + (nbytes >> PAGE_SHIFT);
> + end += (offset_in_page(offset) + offset_in_page(nbytes) +
> + PAGE_SIZE - 1) >> PAGE_SHIFT;
The change to how the end page index is calculated is unrelated to the nth_page
fix, and it makes the code slower and harder to understand. My original code
just rounded the new offset up to a page boundary to get the end page index.
- Eric
* Re: [v4 PATCH 1/2] crypto: scatterwalk - Use nth_page instead of doing it by hand
2025-03-16 3:31 ` Eric Biggers
@ 2025-03-16 4:28 ` Herbert Xu
2025-03-16 4:42 ` Eric Biggers
0 siblings, 1 reply; 6+ messages in thread
From: Herbert Xu @ 2025-03-16 4:28 UTC (permalink / raw)
To: Eric Biggers; +Cc: Linux Crypto Mailing List
On Sat, Mar 15, 2025 at 08:31:41PM -0700, Eric Biggers wrote:
>
> > @@ -189,14 +195,18 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
> > * reliably optimized out or not.
> > */
> > if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
> > - struct page *base_page, *start_page, *end_page, *page;
> > + struct page *base_page;
> > + unsigned int offset;
> > + int start, end, i;
> >
> > base_page = sg_page(walk->sg);
> > - start_page = base_page + (walk->offset >> PAGE_SHIFT);
> > - end_page = base_page + ((walk->offset + nbytes +
> > - PAGE_SIZE - 1) >> PAGE_SHIFT);
> > - for (page = start_page; page < end_page; page++)
> > - flush_dcache_page(page);
> > + offset = walk->offset;
> > + start = offset >> PAGE_SHIFT;
> > + end = start + (nbytes >> PAGE_SHIFT);
> > + end += (offset_in_page(offset) + offset_in_page(nbytes) +
> > + PAGE_SIZE - 1) >> PAGE_SHIFT;
>
> The change to how the end page index is calculated is unrelated to the nth_page
> fix, and it makes the code slower and harder to understand. My original code
> just rounded the new offset up to a page boundary to get the end page index.
The original code is open to overflows in the addition. The new
version is not.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [v4 PATCH 1/2] crypto: scatterwalk - Use nth_page instead of doing it by hand
2025-03-16 4:28 ` Herbert Xu
@ 2025-03-16 4:42 ` Eric Biggers
0 siblings, 0 replies; 6+ messages in thread
From: Eric Biggers @ 2025-03-16 4:42 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Sun, Mar 16, 2025 at 12:28:24PM +0800, Herbert Xu wrote:
> On Sat, Mar 15, 2025 at 08:31:41PM -0700, Eric Biggers wrote:
> >
> > > @@ -189,14 +195,18 @@ static inline void scatterwalk_done_dst(struct scatter_walk *walk,
> > > * reliably optimized out or not.
> > > */
> > > if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE) {
> > > - struct page *base_page, *start_page, *end_page, *page;
> > > + struct page *base_page;
> > > + unsigned int offset;
> > > + int start, end, i;
> > >
> > > base_page = sg_page(walk->sg);
> > > - start_page = base_page + (walk->offset >> PAGE_SHIFT);
> > > - end_page = base_page + ((walk->offset + nbytes +
> > > - PAGE_SIZE - 1) >> PAGE_SHIFT);
> > > - for (page = start_page; page < end_page; page++)
> > > - flush_dcache_page(page);
> > > + offset = walk->offset;
> > > + start = offset >> PAGE_SHIFT;
> > > + end = start + (nbytes >> PAGE_SHIFT);
> > > + end += (offset_in_page(offset) + offset_in_page(nbytes) +
> > > + PAGE_SIZE - 1) >> PAGE_SHIFT;
> >
> > The change to how the end page index is calculated is unrelated to the nth_page
> > fix, and it makes the code slower and harder to understand. My original code
> > just rounded the new offset up to a page boundary to get the end page index.
>
> The original code is open to overflows in the addition. The new
> version is not.
If you think you are fixing a separate issue, you need to say so.
But I also don't think it's worth worrying about lengths so close to UINT_MAX.
No one is using them, they have no testing, and there are likely to be other
overflows like this one too.
- Eric