From: Eric Biggers <ebiggers@kernel.org>
To: Nathan Huckleberry <nhuck@google.com>
Cc: linux-crypto@vger.kernel.org,
Herbert Xu <herbert@gondor.apana.org.au>,
"David S. Miller" <davem@davemloft.net>,
linux-arm-kernel@lists.infradead.org,
Paul Crowley <paulcrowley@google.com>,
Sami Tolvanen <samitolvanen@google.com>,
Ard Biesheuvel <ardb@kernel.org>
Subject: Re: [PATCH v4 4/8] crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
Date: Thu, 14 Apr 2022 00:00:51 -0700
Message-ID: <YlfGo8wSXS58mKmL@sol.localdomain>
In-Reply-To: <20220412172816.917723-5-nhuck@google.com>

A few initial comments; I'll take a closer look at the .S file soon...
On Tue, Apr 12, 2022 at 05:28:12PM +0000, Nathan Huckleberry wrote:
> Add hardware accelerated versions of XCTR for x86-64 CPUs with AESNI
> support. These implementations are modified versions of the CTR
> implementations found in aesni-intel_asm.S and aes_ctrby8_avx-x86_64.S.
>
> More information on XCTR can be found in the HCTR2 paper:
> Length-preserving encryption with HCTR2:
> https://eprint.iacr.org/2021/1441.pdf
The above link doesn't work.
> +#ifdef __x86_64__
> +/*
> + * void aesni_xctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
> + * size_t len, u8 *iv, int byte_ctr)
> + */
This prototype doesn't match the one declared in the .c file.
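If the comment is meant to describe the by8 routines that the glue code
actually declares (quoted below), it would presumably need to look more
like the following -- just a sketch, mirroring those prototypes:

/*
 * void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8 *out,
 *				 unsigned int num_bytes,
 *				 unsigned int byte_ctr);
 */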
> +
> +asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
> +
> +asmlinkage void aes_xctr_enc_192_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
> +
> +asmlinkage void aes_xctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8
> + *out, unsigned int num_bytes, unsigned int byte_ctr);
Please don't have line breaks between parameter types and their names.
These should look like:
asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys,
					 u8 *out, unsigned int num_bytes,
					 unsigned int byte_ctr);
Also, why aren't the keys const?
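I.e., something like this (a sketch, assuming the asm never writes to the
key schedule, which it shouldn't):

asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv,
					 const void *keys, u8 *out,
					 unsigned int num_bytes,
					 unsigned int byte_ctr);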
> +static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out, const u8
> + *in, unsigned int len, u8 *iv, unsigned int
> + byte_ctr)
> +{
> + if (ctx->key_length == AES_KEYSIZE_128)
> + aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> + else if (ctx->key_length == AES_KEYSIZE_192)
> + aes_xctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> + else
> + aes_xctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len,
> + byte_ctr);
> +}
Same comments above.
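Reformatted, this would be something like (a sketch; the cast would also
go away if the keys become const):

static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
				   const u8 *in, unsigned int len, u8 *iv,
				   unsigned int byte_ctr)
{
	if (ctx->key_length == AES_KEYSIZE_128)
		aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len,
					 byte_ctr);
	...
}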
> +static int xctr_crypt(struct skcipher_request *req)
> +{
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> + u8 keystream[AES_BLOCK_SIZE];
> + u8 ctr[AES_BLOCK_SIZE];
> + struct skcipher_walk walk;
> + unsigned int nbytes;
> + unsigned int byte_ctr = 0;
> + int err;
> + __le32 ctr32;
> +
> + err = skcipher_walk_virt(&walk, req, false);
> +
> + while ((nbytes = walk.nbytes) > 0) {
> + kernel_fpu_begin();
> + if (nbytes & AES_BLOCK_MASK)
> + static_call(aesni_xctr_enc_tfm)(ctx, walk.dst.virt.addr,
> + walk.src.virt.addr, nbytes & AES_BLOCK_MASK,
> + walk.iv, byte_ctr);
> + nbytes &= ~AES_BLOCK_MASK;
> + byte_ctr += walk.nbytes - nbytes;
> +
> + if (walk.nbytes == walk.total && nbytes > 0) {
> + ctr32 = cpu_to_le32(byte_ctr / AES_BLOCK_SIZE + 1);
> + memcpy(ctr, walk.iv, AES_BLOCK_SIZE);
> + crypto_xor(ctr, (u8 *)&ctr32, sizeof(ctr32));
> + aesni_enc(ctx, keystream, ctr);
> + crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes -
> + nbytes, walk.src.virt.addr + walk.nbytes
> + - nbytes, keystream, nbytes);
> + byte_ctr += nbytes;
> + nbytes = 0;
> + }
For the final block case, it would be a bit simpler to do something like this:
	__le32 block[AES_BLOCK_SIZE / sizeof(__le32)];
...
memcpy(block, walk.iv, AES_BLOCK_SIZE);
block[0] ^= cpu_to_le32(1 + byte_ctr / AES_BLOCK_SIZE);
aesni_enc(ctx, (u8 *)block, (u8 *)block);
I.e., have one buffer, use a regular XOR instead of crypto_xor(), and encrypt it
in-place.
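Spelled out, the final-block branch could then look like this (untested
sketch, reusing the names from the quoted code):

	if (walk.nbytes == walk.total && nbytes > 0) {
		__le32 block[AES_BLOCK_SIZE / sizeof(__le32)];

		/* Build the XCTR counter block: IV XOR little-endian
		 * block index, then encrypt it in-place. */
		memcpy(block, walk.iv, AES_BLOCK_SIZE);
		block[0] ^= cpu_to_le32(1 + byte_ctr / AES_BLOCK_SIZE);
		aesni_enc(ctx, (u8 *)block, (u8 *)block);

		/* XOR the keystream into the partial final block. */
		crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes - nbytes,
			       walk.src.virt.addr + walk.nbytes - nbytes,
			       (u8 *)block, nbytes);
		byte_ctr += nbytes;
		nbytes = 0;
	}

This also makes the separate 'keystream' and 'ctr' buffers unnecessary.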
> @@ -1162,6 +1249,8 @@ static int __init aesni_init(void)
> /* optimize performance of ctr mode encryption transform */
> static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);
> pr_info("AES CTR mode by8 optimization enabled\n");
> + static_call_update(aesni_xctr_enc_tfm, aesni_xctr_enc_avx_tfm);
> + pr_info("AES XCTR mode by8 optimization enabled\n");
> }
Please don't add the log message above, as it would get printed at every boot-up
on most x86 systems, and it's not important enough for that. The existing
message "AES CTR mode ..." shouldn't really exist in the first place.
- Eric