* [PATCH 0/4] crypto: adiantum optimizations
From: Eric Biggers @ 2023-10-10  5:59 UTC
  To: linux-crypto

This series slightly improves the performance of adiantum encryption and
decryption on single-page messages.
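
For reference, a minimal sketch of how a kernel user would allocate the
skcipher instance that this series optimizes (illustrative only, not
part of the series; the function name is hypothetical and error
handling is abbreviated):

	#include <crypto/skcipher.h>

	/* Instantiate the template as "adiantum(xchacha12,aes)". */
	static struct crypto_skcipher *alloc_adiantum_example(void)
	{
		return crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0, 0);
	}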

Eric Biggers (4):
  crypto: adiantum - add fast path for single-page messages
  crypto: arm/nhpoly1305 - implement ->digest
  crypto: arm64/nhpoly1305 - implement ->digest
  crypto: x86/nhpoly1305 - implement ->digest

 arch/arm/crypto/nhpoly1305-neon-glue.c   |  9 ++++
 arch/arm64/crypto/nhpoly1305-neon-glue.c |  9 ++++
 arch/x86/crypto/nhpoly1305-avx2-glue.c   |  9 ++++
 arch/x86/crypto/nhpoly1305-sse2-glue.c   |  9 ++++
 crypto/adiantum.c                        | 65 +++++++++++++++++-------
 5 files changed, 83 insertions(+), 18 deletions(-)

base-commit: 8468516f9f93a41dc65158b6428a1a1039c68f20
-- 
2.42.0



* [PATCH 1/4] crypto: adiantum - add fast path for single-page messages
From: Eric Biggers @ 2023-10-10  5:59 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

When the source scatterlist is a single page, optimize the first hash
step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and loading the narrow part of the source data.

Likewise, when the destination scatterlist is a single page, optimize
the second hash step of adiantum to use crypto_shash_digest() instead of
init/update/final, and use the same local kmap for both hashing the bulk
part and storing the narrow part of the destination data.
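
As a point of reference, here is a minimal sketch of the two call
patterns on the kernel shash API (illustrative only, not part of this
patch; it assumes a contiguous, already-mapped buffer, and the helper
names are hypothetical):

	/* The generic three-call pattern the scatterlist walk must use. */
	static int hash_three_calls(struct shash_desc *desc, const u8 *buf,
				    unsigned int len, u8 *out)
	{
		int err = crypto_shash_init(desc);

		if (!err)
			err = crypto_shash_update(desc, buf, len);
		if (!err)
			err = crypto_shash_final(desc, out);
		return err;
	}

	/*
	 * The single call the single-page fast path can use instead; this
	 * dispatches once to the algorithm's ->digest method.
	 */
	static int hash_one_call(struct shash_desc *desc, const u8 *buf,
				 unsigned int len, u8 *out)
	{
		return crypto_shash_digest(desc, buf, len, out);
	}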

In some cases these optimizations improve performance significantly.

Note: ideally, for optimal performance each architecture should
implement the full "adiantum(xchacha12,aes)" algorithm and fully
optimize the contiguous buffer case to use no indirect calls.  That's
not something I've gotten around to doing, though.  This commit just
makes a relatively small change that provides some benefit with the
existing template-based approach.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/adiantum.c | 65 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 47 insertions(+), 18 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index c33ba22a66389..cd2b8f5042dc9 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -238,39 +238,35 @@ static void adiantum_hash_header(struct skcipher_request *req)
 
 	BUILD_BUG_ON(TWEAK_SIZE % POLY1305_BLOCK_SIZE != 0);
 	poly1305_core_blocks(&state, &tctx->header_hash_key, req->iv,
 			     TWEAK_SIZE / POLY1305_BLOCK_SIZE, 1);
 
 	poly1305_core_emit(&state, NULL, &rctx->header_hash);
 }
 
 /* Hash the left-hand part (the "bulk") of the message using NHPoly1305 */
 static int adiantum_hash_message(struct skcipher_request *req,
-				 struct scatterlist *sgl, le128 *digest)
+				 struct scatterlist *sgl, unsigned int nents,
+				 le128 *digest)
 {
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
 	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
 	struct shash_desc *hash_desc = &rctx->u.hash_desc;
 	struct sg_mapping_iter miter;
 	unsigned int i, n;
 	int err;
 
-	hash_desc->tfm = tctx->hash;
-
 	err = crypto_shash_init(hash_desc);
 	if (err)
 		return err;
 
-	sg_miter_start(&miter, sgl, sg_nents(sgl),
-		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+	sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
 	for (i = 0; i < bulk_len; i += n) {
 		sg_miter_next(&miter);
 		n = min_t(unsigned int, miter.length, bulk_len - i);
 		err = crypto_shash_update(hash_desc, miter.addr, n);
 		if (err)
 			break;
 	}
 	sg_miter_stop(&miter);
 	if (err)
 		return err;
@@ -278,80 +274,113 @@ static int adiantum_hash_message(struct skcipher_request *req,
 	return crypto_shash_final(hash_desc, (u8 *)digest);
 }
 
 /* Continue Adiantum encryption/decryption after the stream cipher step */
 static int adiantum_finish(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
 	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	struct scatterlist *dst = req->dst;
+	const unsigned int dst_nents = sg_nents(dst);
 	le128 digest;
 	int err;
 
 	/* If decrypting, decrypt C_M with the block cipher to get P_M */
 	if (!rctx->enc)
 		crypto_cipher_decrypt_one(tctx->blockcipher, rctx->rbuf.bytes,
 					  rctx->rbuf.bytes);
 
 	/*
 	 * Second hash step
 	 *	enc: C_R = C_M - H_{K_H}(T, C_L)
 	 *	dec: P_R = P_M - H_{K_H}(T, P_L)
 	 */
-	err = adiantum_hash_message(req, req->dst, &digest);
-	if (err)
-		return err;
-	le128_add(&digest, &digest, &rctx->header_hash);
-	le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
-	scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->dst,
-				 bulk_len, BLOCKCIPHER_BLOCK_SIZE, 1);
+	rctx->u.hash_desc.tfm = tctx->hash;
+	le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
+	if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
+		/* Fast path for single-page destination */
+		void *virt = kmap_local_page(sg_page(dst)) + dst->offset;
+
+		err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+					  (u8 *)&digest);
+		if (err) {
+			kunmap_local(virt);
+			return err;
+		}
+		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+		memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
+		kunmap_local(virt);
+	} else {
+		/* Slow path that works for any destination scatterlist */
+		err = adiantum_hash_message(req, dst, dst_nents, &digest);
+		if (err)
+			return err;
+		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+		scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst,
+					 bulk_len, sizeof(le128), 1);
+	}
 	return 0;
 }
 
 static void adiantum_streamcipher_done(void *data, int err)
 {
 	struct skcipher_request *req = data;
 
 	if (!err)
 		err = adiantum_finish(req);
 
 	skcipher_request_complete(req, err);
 }
 
 static int adiantum_crypt(struct skcipher_request *req, bool enc)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
 	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	struct scatterlist *src = req->src;
+	const unsigned int src_nents = sg_nents(src);
 	unsigned int stream_len;
 	le128 digest;
 	int err;
 
 	if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE)
 		return -EINVAL;
 
 	rctx->enc = enc;
 
 	/*
 	 * First hash step
 	 *	enc: P_M = P_R + H_{K_H}(T, P_L)
 	 *	dec: C_M = C_R + H_{K_H}(T, C_L)
 	 */
 	adiantum_hash_header(req);
-	err = adiantum_hash_message(req, req->src, &digest);
+	rctx->u.hash_desc.tfm = tctx->hash;
+	if (src_nents == 1 && src->offset + req->cryptlen <= PAGE_SIZE) {
+		/* Fast path for single-page source */
+		void *virt = kmap_local_page(sg_page(src)) + src->offset;
+
+		err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+					  (u8 *)&digest);
+		memcpy(&rctx->rbuf.bignum, virt + bulk_len, sizeof(le128));
+		kunmap_local(virt);
+	} else {
+		/* Slow path that works for any source scatterlist */
+		err = adiantum_hash_message(req, src, src_nents, &digest);
+		scatterwalk_map_and_copy(&rctx->rbuf.bignum, src,
+					 bulk_len, sizeof(le128), 0);
+	}
 	if (err)
 		return err;
-	le128_add(&digest, &digest, &rctx->header_hash);
-	scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->src,
-				 bulk_len, BLOCKCIPHER_BLOCK_SIZE, 0);
+	le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
 	le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 
 	/* If encrypting, encrypt P_M with the block cipher to get C_M */
 	if (enc)
 		crypto_cipher_encrypt_one(tctx->blockcipher, rctx->rbuf.bytes,
 					  rctx->rbuf.bytes);
 
 	/* Initialize the rest of the XChaCha IV (first part is C_M) */
 	BUILD_BUG_ON(BLOCKCIPHER_BLOCK_SIZE != 16);
 	BUILD_BUG_ON(XCHACHA_IV_SIZE != 32);	/* nonce || stream position */
-- 
2.42.0



* [PATCH 2/4] crypto: arm/nhpoly1305 - implement ->digest
From: Eric Biggers @ 2023-10-10  5:59 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm/crypto/nhpoly1305-neon-glue.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm/crypto/nhpoly1305-neon-glue.c b/arch/arm/crypto/nhpoly1305-neon-glue.c
index e93e41ff26566..62cf7ccdde736 100644
--- a/arch/arm/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm/crypto/nhpoly1305-neon-glue.c
@@ -27,30 +27,39 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 
 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, nh_neon);
 		kernel_neon_end();
 		src += n;
 		srclen -= n;
 	} while (srclen);
 	return 0;
 }
 
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+				  const u8 *src, unsigned int srclen, u8 *out)
+{
+	return crypto_nhpoly1305_init(desc) ?:
+	       nhpoly1305_neon_update(desc, src, srclen) ?:
+	       crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
 	.base.cra_name		= "nhpoly1305",
 	.base.cra_driver_name	= "nhpoly1305-neon",
 	.base.cra_priority	= 200,
 	.base.cra_ctxsize	= sizeof(struct nhpoly1305_key),
 	.base.cra_module	= THIS_MODULE,
 	.digestsize		= POLY1305_DIGEST_SIZE,
 	.init			= crypto_nhpoly1305_init,
 	.update			= nhpoly1305_neon_update,
 	.final			= crypto_nhpoly1305_final,
+	.digest			= nhpoly1305_neon_digest,
 	.setkey			= crypto_nhpoly1305_setkey,
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
 static int __init nhpoly1305_mod_init(void)
 {
 	if (!(elf_hwcap & HWCAP_NEON))
 		return -ENODEV;
 
 	return crypto_register_shash(&nhpoly1305_alg);
-- 
2.42.0



* [PATCH 3/4] crypto: arm64/nhpoly1305 - implement ->digest
From: Eric Biggers @ 2023-10-10  5:59 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/nhpoly1305-neon-glue.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
index cd882c35d9252..e4a0b463f080e 100644
--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
@@ -27,30 +27,39 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
 
 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, nh_neon);
 		kernel_neon_end();
 		src += n;
 		srclen -= n;
 	} while (srclen);
 	return 0;
 }
 
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+				  const u8 *src, unsigned int srclen, u8 *out)
+{
+	return crypto_nhpoly1305_init(desc) ?:
+	       nhpoly1305_neon_update(desc, src, srclen) ?:
+	       crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
 	.base.cra_name		= "nhpoly1305",
 	.base.cra_driver_name	= "nhpoly1305-neon",
 	.base.cra_priority	= 200,
 	.base.cra_ctxsize	= sizeof(struct nhpoly1305_key),
 	.base.cra_module	= THIS_MODULE,
 	.digestsize		= POLY1305_DIGEST_SIZE,
 	.init			= crypto_nhpoly1305_init,
 	.update			= nhpoly1305_neon_update,
 	.final			= crypto_nhpoly1305_final,
+	.digest			= nhpoly1305_neon_digest,
 	.setkey			= crypto_nhpoly1305_setkey,
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
 static int __init nhpoly1305_mod_init(void)
 {
 	if (!cpu_have_named_feature(ASIMD))
 		return -ENODEV;
 
 	return crypto_register_shash(&nhpoly1305_alg);
-- 
2.42.0



* [PATCH 4/4] crypto: x86/nhpoly1305 - implement ->digest
From: Eric Biggers @ 2023-10-10  5:59 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

Implement the ->digest method to improve performance on single-page
messages by reducing the number of indirect calls.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/nhpoly1305-avx2-glue.c | 9 +++++++++
 arch/x86/crypto/nhpoly1305-sse2-glue.c | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c
index 46b036204ed91..c3a872f4d6a77 100644
--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
@@ -27,30 +27,39 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
 
 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, nh_avx2);
 		kernel_fpu_end();
 		src += n;
 		srclen -= n;
 	} while (srclen);
 	return 0;
 }
 
+static int nhpoly1305_avx2_digest(struct shash_desc *desc,
+				  const u8 *src, unsigned int srclen, u8 *out)
+{
+	return crypto_nhpoly1305_init(desc) ?:
+	       nhpoly1305_avx2_update(desc, src, srclen) ?:
+	       crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
 	.base.cra_name		= "nhpoly1305",
 	.base.cra_driver_name	= "nhpoly1305-avx2",
 	.base.cra_priority	= 300,
 	.base.cra_ctxsize	= sizeof(struct nhpoly1305_key),
 	.base.cra_module	= THIS_MODULE,
 	.digestsize		= POLY1305_DIGEST_SIZE,
 	.init			= crypto_nhpoly1305_init,
 	.update			= nhpoly1305_avx2_update,
 	.final			= crypto_nhpoly1305_final,
+	.digest			= nhpoly1305_avx2_digest,
 	.setkey			= crypto_nhpoly1305_setkey,
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
 static int __init nhpoly1305_mod_init(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_AVX2) ||
 	    !boot_cpu_has(X86_FEATURE_OSXSAVE))
 		return -ENODEV;
 
diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c
index 4a4970d751076..a268a8439a5c9 100644
--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
+++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
@@ -27,30 +27,39 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
 
 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, nh_sse2);
 		kernel_fpu_end();
 		src += n;
 		srclen -= n;
 	} while (srclen);
 	return 0;
 }
 
+static int nhpoly1305_sse2_digest(struct shash_desc *desc,
+				  const u8 *src, unsigned int srclen, u8 *out)
+{
+	return crypto_nhpoly1305_init(desc) ?:
+	       nhpoly1305_sse2_update(desc, src, srclen) ?:
+	       crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
 	.base.cra_name		= "nhpoly1305",
 	.base.cra_driver_name	= "nhpoly1305-sse2",
 	.base.cra_priority	= 200,
 	.base.cra_ctxsize	= sizeof(struct nhpoly1305_key),
 	.base.cra_module	= THIS_MODULE,
 	.digestsize		= POLY1305_DIGEST_SIZE,
 	.init			= crypto_nhpoly1305_init,
 	.update			= nhpoly1305_sse2_update,
 	.final			= crypto_nhpoly1305_final,
+	.digest			= nhpoly1305_sse2_digest,
 	.setkey			= crypto_nhpoly1305_setkey,
 	.descsize		= sizeof(struct nhpoly1305_state),
 };
 
 static int __init nhpoly1305_mod_init(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_XMM2))
 		return -ENODEV;
 
 	return crypto_register_shash(&nhpoly1305_alg);
-- 
2.42.0



* Re: [PATCH 0/4] crypto: adiantum optimizations
From: Herbert Xu @ 2023-10-20  5:52 UTC
  To: Eric Biggers; +Cc: linux-crypto

Eric Biggers <ebiggers@kernel.org> wrote:
> This series slightly improves the performance of adiantum encryption and
> decryption on single-page messages.
> 
> Eric Biggers (4):
>  crypto: adiantum - add fast path for single-page messages
>  crypto: arm/nhpoly1305 - implement ->digest
>  crypto: arm64/nhpoly1305 - implement ->digest
>  crypto: x86/nhpoly1305 - implement ->digest
> 
> arch/arm/crypto/nhpoly1305-neon-glue.c   |  9 ++++
> arch/arm64/crypto/nhpoly1305-neon-glue.c |  9 ++++
> arch/x86/crypto/nhpoly1305-avx2-glue.c   |  9 ++++
> arch/x86/crypto/nhpoly1305-sse2-glue.c   |  9 ++++
> crypto/adiantum.c                        | 65 +++++++++++++++++-------
> 5 files changed, 83 insertions(+), 18 deletions(-)
> 
> base-commit: 8468516f9f93a41dc65158b6428a1a1039c68f20

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
