* [PATCH 0/3] crypto: x86/aesni - Improve XTS data type
@ 2023-09-25 15:17 Chang S. Bae
2023-09-25 15:17 ` [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-25 15:17 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
Each field within struct aesni_xts_ctx is currently defined as a byte
array sized to match struct crypto_aes_ctx. However, each field actually
holds an instance of that struct.
Redefining the data type accurately requires a few adjustments to the
address alignment code. The series first refactors the common alignment
code, then updates the structure definition. Finally, the XTS alignment
is performed once, early on, rather than at every access point.
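For a quick view, the core type change (from patch 2 of this series) is:
  struct aesni_xts_ctx {
  -	u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
  -	u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
  +	struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
  +	struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
  };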
This change was suggested during Eric's review of another series
intended to enable an alternative AES implementation [1][2]. I view it
as an enhancement to the mainline, independent of that series.
I have divided these changes into incremental pieces to make them
considerably easier to review and maintain.
The series is based on the cryptodev's master branch:
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
Thanks,
Chang
[1] https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
[2] https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
Chang S. Bae (3):
crypto: x86/aesni - Refactor the common address alignment code
crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
crypto: x86/aesni - Perform address alignment early for XTS mode
arch/x86/crypto/aesni-intel_glue.c | 52 ++++++++++++++----------------
1 file changed, 25 insertions(+), 27 deletions(-)
base-commit: 1c43c0f1f84aa59dfc98ce66f0a67b2922aa7f9d
--
2.34.1
* [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code
2023-09-25 15:17 [PATCH 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
@ 2023-09-25 15:17 ` Chang S. Bae
2023-09-26 5:06 ` Eric Biggers
2023-09-25 15:17 ` [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Chang S. Bae
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Chang S. Bae @ 2023-09-25 15:17 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
The address alignment code is currently duplicated for each mode.
Refactor it into a common helper and simplify the per-mode alignment
helpers.
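As a minimal sketch of what the consolidated helper does (the addresses
here are hypothetical): AESNI_ALIGN is 16 in this file, so the context
pointer only needs rounding up when the crypto API guarantees a smaller
alignment:
	void *ctx = (void *)0x1008;           /* e.g. only 8-byte aligned */
	void *aligned = PTR_ALIGN(ctx, 16);   /* rounded up to 0x1010 */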
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
---
arch/x86/crypto/aesni-intel_glue.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 39d6a62ac627..241d38ae1ed9 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -80,6 +80,13 @@ struct gcm_context_data {
u8 hash_keys[GCM_BLOCK_LEN * 16];
};
+static inline void *aes_align_addr(void *addr)
+{
+ if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+ return addr;
+ return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len);
asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@@ -201,32 +208,19 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
static inline struct
aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return (struct aesni_rfc4106_gcm_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
}
static inline struct
generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return (struct generic_gcmaes_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
}
#endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
{
- unsigned long addr = (unsigned long)raw_ctx;
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return (struct crypto_aes_ctx *)ALIGN(addr, align);
+ return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
}
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
--
2.34.1
* [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
2023-09-25 15:17 [PATCH 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
2023-09-25 15:17 ` [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
@ 2023-09-25 15:17 ` Chang S. Bae
2023-09-26 5:20 ` Eric Biggers
2023-09-25 15:17 ` [PATCH 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
3 siblings, 1 reply; 11+ messages in thread
From: Chang S. Bae @ 2023-09-25 15:17 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
Currently, every field in struct aesni_xts_ctx is defined as a byte
array of the same size as struct crypto_aes_ctx. This obscures the
actual data type, and the choice lacks justification.
To rectify this, update each field in struct aesni_xts_ctx to its
actual type, struct crypto_aes_ctx.
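Since the old byte arrays were sized with sizeof(struct crypto_aes_ctx)
and carried the same alignment attribute, the memory layout is
unchanged. An illustrative compile-time check (not part of this patch)
could assert as much:
	static_assert(sizeof(((struct aesni_xts_ctx *)0)->tweak_ctx) ==
		      sizeof(struct crypto_aes_ctx));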
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
---
arch/x86/crypto/aesni-intel_glue.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 241d38ae1ed9..412a99e914a6 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
};
struct aesni_xts_ctx {
- u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
- u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
};
#define GCM_BLOCK_LEN 16
@@ -885,13 +885,12 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+ err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
- keylen);
+ return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
@@ -933,7 +932,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -942,11 +941,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -974,11 +973,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
* [PATCH 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode
2023-09-25 15:17 [PATCH 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
2023-09-25 15:17 ` [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
2023-09-25 15:17 ` [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Chang S. Bae
@ 2023-09-25 15:17 ` Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
3 siblings, 0 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-25 15:17 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
Currently, each field in struct aesni_xts_ctx is aligned right before
every access. However, this alignment can be performed once, ahead of
time.
Introduce a helper function that converts struct crypto_skcipher *tfm
to struct aesni_xts_ctx *ctx, returning an aligned address. Use this
helper at the beginning of each XTS function and eliminate the
now-redundant alignment code.
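In outline (taken from the patch below), each XTS entry point then
obtains the aligned context pointer once up front:
	struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
	...
	/* all later uses take the fields directly, e.g.: */
	aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);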
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
---
arch/x86/crypto/aesni-intel_glue.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 412a99e914a6..b344652510a3 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -223,6 +223,11 @@ static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
}
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+ return (struct aesni_xts_ctx *)aes_align_addr(crypto_skcipher_ctx(tfm));
+}
+
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
const u8 *in_key, unsigned int key_len)
{
@@ -875,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
@@ -885,18 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
+ err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
+ return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int tail = req->cryptlen % AES_BLOCK_SIZE;
struct skcipher_request subreq;
struct skcipher_walk walk;
@@ -932,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -941,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -973,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
* Re: [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code
2023-09-25 15:17 ` [PATCH 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
@ 2023-09-26 5:06 ` Eric Biggers
0 siblings, 0 replies; 11+ messages in thread
From: Eric Biggers @ 2023-09-26 5:06 UTC
To: Chang S. Bae; +Cc: linux-kernel, linux-crypto, herbert, davem, x86
On Mon, Sep 25, 2023 at 08:17:50AM -0700, Chang S. Bae wrote:
> The address alignment code is currently duplicated for each mode.
> Refactor it into a common helper and simplify the per-mode alignment
> helpers.
>
> Suggested-by: Eric Biggers <ebiggers@kernel.org>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: linux-crypto@vger.kernel.org
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Link: https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
> ---
> arch/x86/crypto/aesni-intel_glue.c | 26 ++++++++++----------------
> 1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> index 39d6a62ac627..241d38ae1ed9 100644
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -80,6 +80,13 @@ struct gcm_context_data {
> u8 hash_keys[GCM_BLOCK_LEN * 16];
> };
>
> +static inline void *aes_align_addr(void *addr)
> +{
> + if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
> + return addr;
> + return PTR_ALIGN(addr, AESNI_ALIGN);
> +}
> +
> asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
> unsigned int key_len);
> asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
> @@ -201,32 +208,19 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
> static inline struct
> aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
> {
> - unsigned long align = AESNI_ALIGN;
> -
> - if (align <= crypto_tfm_ctx_alignment())
> - align = 1;
> - return PTR_ALIGN(crypto_aead_ctx(tfm), align);
> + return (struct aesni_rfc4106_gcm_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
> }
>
> static inline struct
> generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
> {
> - unsigned long align = AESNI_ALIGN;
> -
> - if (align <= crypto_tfm_ctx_alignment())
> - align = 1;
> - return PTR_ALIGN(crypto_aead_ctx(tfm), align);
> + return (struct generic_gcmaes_ctx *)aes_align_addr(crypto_aead_ctx(tfm));
> }
> #endif
>
> static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
> {
> - unsigned long addr = (unsigned long)raw_ctx;
> - unsigned long align = AESNI_ALIGN;
> -
> - if (align <= crypto_tfm_ctx_alignment())
> - align = 1;
> - return (struct crypto_aes_ctx *)ALIGN(addr, align);
> + return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
> }
The casts can be dropped, since aes_align_addr() returns 'void *'.
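For example, aes_ctx() could then simply become:
	static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
	{
		return aes_align_addr(raw_ctx);
	}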
- Eric
* Re: [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
2023-09-25 15:17 ` [PATCH 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Chang S. Bae
@ 2023-09-26 5:20 ` Eric Biggers
0 siblings, 0 replies; 11+ messages in thread
From: Eric Biggers @ 2023-09-26 5:20 UTC
To: Chang S. Bae; +Cc: linux-kernel, linux-crypto, herbert, davem, x86
On Mon, Sep 25, 2023 at 08:17:51AM -0700, Chang S. Bae wrote:
> Currently, every field in struct aesni_xts_ctx is defined as a byte
> array of the same size as struct crypto_aes_ctx. This obscures the
> actual data type, and the choice lacks justification.
>
> To rectify this, update each field in struct aesni_xts_ctx to its
> actual type, struct crypto_aes_ctx.
>
> Suggested-by: Eric Biggers <ebiggers@kernel.org>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: linux-crypto@vger.kernel.org
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
Please put the "Link" directly after the Suggested-by to make it clear that the
link is for the suggestion. Thanks!
- Eric
* [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type
2023-09-25 15:17 [PATCH 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
` (2 preceding siblings ...)
2023-09-25 15:17 ` [PATCH 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode Chang S. Bae
@ 2023-09-28 7:25 ` Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
` (3 more replies)
3 siblings, 4 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-28 7:25 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
V1->V2:
* Drop the unnecessary casts (Eric).
* Put the 'Link:' tag right after 'Suggested-by' (Eric).
---
Each field within struct aesni_xts_ctx is currently defined as a byte
array sized to match struct crypto_aes_ctx. However, each field actually
holds an instance of that struct.
Redefining the data type accurately requires a few adjustments to the
address alignment code. The series first refactors the common alignment
code, then updates the structure definition. Finally, the XTS alignment
is performed once, early on, rather than at every access point.
This change was suggested during Eric's review of another series
intended to enable an alternative AES implementation [1][2]. I view it
as an enhancement to the mainline, independent of that series.
I have divided these changes into incremental pieces to make them
considerably easier to review and maintain.
The series is based on the cryptodev's master branch:
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
Thanks,
Chang
[1] https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
[2] https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
Chang S. Bae (3):
crypto: x86/aesni - Refactor the common address alignment code
crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
crypto: x86/aesni - Perform address alignment early for XTS mode
arch/x86/crypto/aesni-intel_glue.c | 52 ++++++++++++++----------------
1 file changed, 25 insertions(+), 27 deletions(-)
base-commit: 1c43c0f1f84aa59dfc98ce66f0a67b2922aa7f9d
--
2.34.1
* [PATCH v2 1/3] crypto: x86/aesni - Refactor the common address alignment code
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
@ 2023-09-28 7:25 ` Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Chang S. Bae
` (2 subsequent siblings)
3 siblings, 0 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-28 7:25 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
The address alignment code is currently duplicated for each mode.
Refactor it into a common helper and simplify the per-mode alignment
helpers.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
V1->V2: drop the casts (Eric).
---
arch/x86/crypto/aesni-intel_glue.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 39d6a62ac627..308deeb0c974 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -80,6 +80,13 @@ struct gcm_context_data {
u8 hash_keys[GCM_BLOCK_LEN * 16];
};
+static inline void *aes_align_addr(void *addr)
+{
+ if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+ return addr;
+ return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len);
asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@@ -201,32 +208,19 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
static inline struct
aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
static inline struct
generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
#endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
{
- unsigned long addr = (unsigned long)raw_ctx;
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return (struct crypto_aes_ctx *)ALIGN(addr, align);
+ return aes_align_addr(raw_ctx);
}
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
--
2.34.1
* [PATCH v2 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
@ 2023-09-28 7:25 ` Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode Chang S. Bae
2023-10-05 10:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Herbert Xu
3 siblings, 0 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-28 7:25 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
Currently, every field in struct aesni_xts_ctx is defined as a byte
array of the same size as struct crypto_aes_ctx. This obscures the
actual data type, and the choice lacks justification.
To rectify this, update each field in struct aesni_xts_ctx to its
actual type, struct crypto_aes_ctx.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
arch/x86/crypto/aesni-intel_glue.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 308deeb0c974..80e28a01aa3a 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
};
struct aesni_xts_ctx {
- u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
- u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
};
#define GCM_BLOCK_LEN 16
@@ -885,13 +885,12 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+ err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
- keylen);
+ return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
@@ -933,7 +932,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -942,11 +941,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -974,11 +973,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
* [PATCH v2 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 1/3] crypto: x86/aesni - Refactor the common address alignment code Chang S. Bae
2023-09-28 7:25 ` [PATCH v2 2/3] crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx Chang S. Bae
@ 2023-09-28 7:25 ` Chang S. Bae
2023-10-05 10:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Herbert Xu
3 siblings, 0 replies; 11+ messages in thread
From: Chang S. Bae @ 2023-09-28 7:25 UTC
To: linux-kernel, linux-crypto; +Cc: herbert, davem, ebiggers, x86, chang.seok.bae
Currently, each field in struct aesni_xts_ctx is aligned right before
every access. However, this alignment can be performed once, ahead of
time.
Introduce a helper function that converts struct crypto_skcipher *tfm
to struct aesni_xts_ctx *ctx, returning an aligned address. Use this
helper at the beginning of each XTS function and eliminate the
now-redundant alignment code.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
V1->V2: drop the cast (Eric).
---
arch/x86/crypto/aesni-intel_glue.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 80e28a01aa3a..b1d90c25975a 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -223,6 +223,11 @@ static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
return aes_align_addr(raw_ctx);
}
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+ return aes_align_addr(crypto_skcipher_ctx(tfm));
+}
+
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
const u8 *in_key, unsigned int key_len)
{
@@ -875,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
@@ -885,18 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
+ err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
+ return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int tail = req->cryptlen % AES_BLOCK_SIZE;
struct skcipher_request subreq;
struct skcipher_walk walk;
@@ -932,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -941,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -973,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
* Re: [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type
2023-09-28 7:25 ` [PATCH v2 0/3] crypto: x86/aesni - Improve XTS data type Chang S. Bae
` (2 preceding siblings ...)
2023-09-28 7:25 ` [PATCH v2 3/3] crypto: x86/aesni - Perform address alignment early for XTS mode Chang S. Bae
@ 2023-10-05 10:25 ` Herbert Xu
3 siblings, 0 replies; 11+ messages in thread
From: Herbert Xu @ 2023-10-05 10:25 UTC
To: Chang S. Bae; +Cc: linux-kernel, linux-crypto, davem, ebiggers, x86
On Thu, Sep 28, 2023 at 12:25:05AM -0700, Chang S. Bae wrote:
> V1->V2:
> * Drop the unnecessary casts (Eric).
> * Put the 'Link:' tag right after 'Suggested-by' (Eric).
>
> ---
>
> Each field within struct aesni_xts_ctx is currently defined as a byte
> array sized to match struct crypto_aes_ctx. However, each field actually
> holds an instance of that struct.
>
> Redefining the data type accurately requires a few adjustments to the
> address alignment code. The series first refactors the common alignment
> code, then updates the structure definition. Finally, the XTS alignment
> is performed once, early on, rather than at every access point.
>
> This change was suggested during Eric's review of another series
> intended to enable an alternative AES implementation [1][2]. I view it
> as an enhancement to the mainline, independent of that series.
>
> I have divided these changes into incremental pieces to make them
> considerably easier to review and maintain.
>
> The series is based on the cryptodev's master branch:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
>
> Thanks,
> Chang
>
> [1] https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
> [2] https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
>
> Chang S. Bae (3):
> crypto: x86/aesni - Refactor the common address alignment code
> crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
> crypto: x86/aesni - Perform address alignment early for XTS mode
>
> arch/x86/crypto/aesni-intel_glue.c | 52 ++++++++++++++----------------
> 1 file changed, 25 insertions(+), 27 deletions(-)
>
>
> base-commit: 1c43c0f1f84aa59dfc98ce66f0a67b2922aa7f9d
> --
> 2.34.1
All applied. Thanks.
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt