From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/29] crypto: skcipher - remove unnecessary page alignment of bounce buffer
Date: Sun, 29 Dec 2024 16:13:51 -0800
Message-ID: <20241230001418.74739-3-ebiggers@kernel.org>
In-Reply-To: <20241230001418.74739-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@google.com>

In the slow path of skcipher_walk where it uses a slab bounce buffer for
the data and/or IV, do not bother to avoid crossing a page boundary in
the part(s) of this buffer that are used, and do not bother to allocate
extra space in the buffer for that purpose. The buffer is accessed only
by virtual address, so pages are irrelevant for it.
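
To illustrate the simpler arithmetic this allows (a minimal userspace
sketch, not the kernel code itself: "minalign" stands in for
crypto_tfm_ctx_alignment(), malloc() for kzalloc(), and malloc() is
assumed to return memory aligned to at least minalign, as kmalloc()
guarantees in the kernel):

	#include <stdint.h>
	#include <stdlib.h>

	/* Round a pointer up to a power-of-two boundary, like PTR_ALIGN(). */
	static uint8_t *ptr_align(uint8_t *p, uintptr_t align)
	{
		return (uint8_t *)(((uintptr_t)p + align - 1) & ~(align - 1));
	}

	/* Return a buffer of bsize bytes aligned to (alignmask + 1). */
	uint8_t *alloc_bounce(size_t bsize, size_t alignmask, size_t minalign)
	{
		/*
		 * The allocation is already minalign-aligned, so at most
		 * (alignmask & ~(minalign - 1)) bytes of padding are needed
		 * to reach the next (alignmask + 1) boundary.  No extra
		 * padding for page boundaries is needed.
		 */
		size_t n = bsize + (alignmask & ~(minalign - 1));
		uint8_t *buf = malloc(n);

		return buf ? ptr_align(buf, alignmask + 1) : NULL;
	}
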
This page-straddling avoidance may have been present because of the
physical address support in skcipher_walk, but that has now been
removed.  Or it may have been present for consistency with the fast
path, which currently does not hand back addresses that span pages; but
that behavior is just a side effect of the pages being "mapped" one by
one, not an actual requirement.
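
For reference, the removed skcipher_get_spot() helper only made a
difference when an aligned spot would straddle a page.  A standalone
sketch of that (now unnecessary) adjustment, assuming 4 KiB pages:

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SIZE 4096UL
	#define PAGE_MASK (~(PAGE_SIZE - 1))

	/* Bump start to the last page touched by [start, start + len). */
	static uint8_t *get_spot(uint8_t *start, unsigned int len)
	{
		uint8_t *end_page =
			(uint8_t *)((uintptr_t)(start + len - 1) & PAGE_MASK);

		return end_page > start ? end_page : start;
	}

	int main(void)
	{
		/* A 16-byte chunk starting 8 bytes before a page boundary... */
		uint8_t *p = (uint8_t *)0x10ff8;

		/* ...would be moved to 0x11000, wasting the 8 bytes before it. */
		printf("%p\n", (void *)get_spot(p, 16));
		return 0;
	}
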
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/skcipher.c | 62 ++++++++++++-----------------------------------
 1 file changed, 15 insertions(+), 47 deletions(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 8749c44f98a2..887cbce8f78d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -61,32 +61,20 @@ static inline void skcipher_unmap_dst(struct skcipher_walk *walk)
 static inline gfp_t skcipher_walk_gfp(struct skcipher_walk *walk)
 {
 	return walk->flags & SKCIPHER_WALK_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
 }
 
-/* Get a spot of the specified length that does not straddle a page.
- * The caller needs to ensure that there is enough space for this operation.
- */
-static inline u8 *skcipher_get_spot(u8 *start, unsigned int len)
-{
-	u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);
-
-	return max(start, end_page);
-}
-
 static inline struct skcipher_alg *__crypto_skcipher_alg(
 	struct crypto_alg *alg)
 {
 	return container_of(alg, struct skcipher_alg, base);
 }
 
 static int skcipher_done_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
-	u8 *addr;
+	u8 *addr = PTR_ALIGN(walk->buffer, walk->alignmask + 1);
 
-	addr = (u8 *)ALIGN((unsigned long)walk->buffer, walk->alignmask + 1);
-	addr = skcipher_get_spot(addr, bsize);
 	scatterwalk_copychunks(addr, &walk->out, bsize, 1);
 	return 0;
 }
 
 /**
@@ -181,37 +169,26 @@ int skcipher_walk_done(struct skcipher_walk *walk, int res)
 EXPORT_SYMBOL_GPL(skcipher_walk_done);
 
 static int skcipher_next_slow(struct skcipher_walk *walk, unsigned int bsize)
 {
 	unsigned alignmask = walk->alignmask;
-	unsigned a;
 	unsigned n;
 	u8 *buffer;
 
 	if (!walk->buffer)
 		walk->buffer = walk->page;
 	buffer = walk->buffer;
-	if (buffer)
-		goto ok;
-
-	/* Start with the minimum alignment of kmalloc. */
-	a = crypto_tfm_ctx_alignment() - 1;
-	n = bsize;
-
-	/* Minimum size to align buffer by alignmask. */
-	n += alignmask & ~a;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	n += (bsize - 1) & ~(alignmask | a);
-
-	buffer = kzalloc(n, skcipher_walk_gfp(walk));
-	if (!buffer)
-		return skcipher_walk_done(walk, -ENOMEM);
-	walk->buffer = buffer;
-ok:
+	if (!buffer) {
+		/* Min size for a buffer of bsize bytes aligned to alignmask */
+		n = bsize + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
+
+		buffer = kzalloc(n, skcipher_walk_gfp(walk));
+		if (!buffer)
+			return skcipher_walk_done(walk, -ENOMEM);
+		walk->buffer = buffer;
+	}
 	walk->dst.virt.addr = PTR_ALIGN(buffer, alignmask + 1);
-	walk->dst.virt.addr = skcipher_get_spot(walk->dst.virt.addr, bsize);
 	walk->src.virt.addr = walk->dst.virt.addr;
 
 	scatterwalk_copychunks(walk->src.virt.addr, &walk->in, bsize, 0);
 
 	walk->nbytes = bsize;
@@ -294,34 +271,25 @@ static int skcipher_walk_next(struct skcipher_walk *walk)
 	return skcipher_next_fast(walk);
 }
 
 static int skcipher_copy_iv(struct skcipher_walk *walk)
 {
-	unsigned a = crypto_tfm_ctx_alignment() - 1;
 	unsigned alignmask = walk->alignmask;
 	unsigned ivsize = walk->ivsize;
-	unsigned bs = walk->stride;
-	unsigned aligned_bs;
+	unsigned aligned_stride = ALIGN(walk->stride, alignmask + 1);
 	unsigned size;
 	u8 *iv;
 
-	aligned_bs = ALIGN(bs, alignmask + 1);
-
-	/* Minimum size to align buffer by alignmask. */
-	size = alignmask & ~a;
-
-	size += aligned_bs + ivsize;
-
-	/* Minimum size to ensure buffer does not straddle a page. */
-	size += (bs - 1) & ~(alignmask | a);
+	/* Min size for a buffer of stride + ivsize, aligned to alignmask */
+	size = aligned_stride + ivsize +
+	       (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
 
 	walk->buffer = kmalloc(size, skcipher_walk_gfp(walk));
 	if (!walk->buffer)
 		return -ENOMEM;
 
-	iv = PTR_ALIGN(walk->buffer, alignmask + 1);
-	iv = skcipher_get_spot(iv, bs) + aligned_bs;
+	iv = PTR_ALIGN(walk->buffer, alignmask + 1) + aligned_stride;
 
 	walk->iv = memcpy(iv, walk->iv, walk->ivsize);
 	return 0;
 }
 
--
2.47.1