* [PATCH 0/5] lib/crypto: Poly1305 fixes
From: Eric Biggers @ 2025-07-06 23:10 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers

This series is also available at:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git poly1305-fixes
This series fixes the arm, arm64, and x86 Poly1305 functions to not
corrupt random tasks' registers when called in the "wrong" context. It
also fixes a performance regression on x86 with short messages.
This series is needed for my upcoming poly1305_kunit test to pass.
Eric Biggers (5):
lib/crypto: arm/poly1305: Remove unneeded empty weak function
lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts
lib/crypto: arm64/poly1305: Fix register corruption in no-SIMD
contexts
lib/crypto: x86/poly1305: Fix register corruption in no-SIMD contexts
lib/crypto: x86/poly1305: Fix performance regression on short messages
lib/crypto/arm/poly1305-glue.c | 8 ++----
lib/crypto/arm64/poly1305-glue.c | 3 +-
lib/crypto/x86/poly1305_glue.c | 48 +++++++++++++++++++++++++++++++-
3 files changed, 51 insertions(+), 8 deletions(-)
base-commit: f1da28dfadd26ef95bbd0b1ddf066e7ffe1505ff
--
2.50.0
* [PATCH 1/5] lib/crypto: arm/poly1305: Remove unneeded empty weak function
From: Eric Biggers @ 2025-07-06 23:10 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers

The __weak, empty definition of poly1305_blocks_neon() was a workaround
to prevent link errors when CONFIG_KERNEL_MODE_NEON=n, since compilers
didn't always optimize out the call to it.

That call is now guarded by IS_ENABLED(CONFIG_KERNEL_MODE_NEON), which
guarantees it is removed at compile time when NEON support is disabled,
so the workaround is no longer needed.
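
For illustration, a minimal sketch of the pattern (the names here are
hypothetical, not the actual poly1305 symbols): since IS_ENABLED()
expands to a constant, the dead branch and its symbol reference are
dropped at compile time, so the linker never sees a call to the missing
function.

#include <linux/kconfig.h>	/* IS_ENABLED() */
#include <linux/types.h>

/* Built from assembly only when CONFIG_KERNEL_MODE_NEON=y. */
void my_blocks_neon(const u8 *src, u32 len);
void my_blocks_scalar(const u8 *src, u32 len);

void my_blocks(const u8 *src, u32 len)
{
	/*
	 * IS_ENABLED(CONFIG_KERNEL_MODE_NEON) is a compile-time constant.
	 * When it is 0, this branch is provably dead, so the compiler
	 * discards the call below, and with it the only reference to the
	 * undefined my_blocks_neon symbol, before the link step runs.
	 */
	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON)) {
		my_blocks_neon(src, len);
		return;
	}
	my_blocks_scalar(src, len);
}
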
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
lib/crypto/arm/poly1305-glue.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/lib/crypto/arm/poly1305-glue.c b/lib/crypto/arm/poly1305-glue.c
index 2603b0771f2c..5b65b840c166 100644
--- a/lib/crypto/arm/poly1305-glue.c
+++ b/lib/crypto/arm/poly1305-glue.c
@@ -25,15 +25,10 @@ asmlinkage void poly1305_blocks_neon(struct poly1305_block_state *state,
 asmlinkage void poly1305_emit_arch(const struct poly1305_state *state,
 				   u8 digest[POLY1305_DIGEST_SIZE],
 				   const u32 nonce[4]);
 EXPORT_SYMBOL_GPL(poly1305_emit_arch);
 
-void __weak poly1305_blocks_neon(struct poly1305_block_state *state,
-				 const u8 *src, u32 len, u32 hibit)
-{
-}
-
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 
 void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
 			  unsigned int len, u32 padbit)
 {
--
2.50.0
* [PATCH 2/5] lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts
From: Eric Biggers @ 2025-07-06 23:10 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers, stable

Restore the SIMD usability check that was removed by commit 773426f4771b
("crypto: arm/poly1305 - Add block-only interface").

This safety check is cheap, and it is well worth it to eliminate a
footgun. While the Poly1305 functions *should* be called only where SIMD
registers are usable, if they are called elsewhere anyway, they should
just do the right thing instead of corrupting random tasks' registers
and/or computing incorrect MACs. Fixing this is also needed for
poly1305_kunit to pass.

Use may_use_simd() instead of the original crypto_simd_usable(), since
poly1305_kunit won't rely on crypto_simd_disabled_for_test.
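
Condensed, the function ends up with the following shape (a sketch
assembled from the hunk below plus its surrounding file; the diff
context truncates the loop tail and the scalar fallback, which upstream
is the assembly routine poly1305_blocks_arm()):

void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
			  unsigned int len, u32 padbit)
{
	len = round_down(len, POLY1305_BLOCK_SIZE);
	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
	    static_branch_likely(&have_neon) && likely(may_use_simd())) {
		do {
			unsigned int todo = min_t(unsigned int, len, SZ_4K);

			/* SIMD registers may be touched only inside this pair. */
			kernel_neon_begin();
			poly1305_blocks_neon(state, src, todo, padbit);
			kernel_neon_end();

			len -= todo;
			src += todo;
		} while (len);
	} else {
		/* The scalar code is safe in any context. */
		poly1305_blocks_arm(state, src, len, padbit);
	}
}
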
Fixes: 773426f4771b ("crypto: arm/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
lib/crypto/arm/poly1305-glue.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/crypto/arm/poly1305-glue.c b/lib/crypto/arm/poly1305-glue.c
index 5b65b840c166..2d86c78af883 100644
--- a/lib/crypto/arm/poly1305-glue.c
+++ b/lib/crypto/arm/poly1305-glue.c
@@ -5,10 +5,11 @@
  * Copyright (C) 2019 Linaro Ltd. <ard.biesheuvel@linaro.org>
  */
 
 #include <asm/hwcap.h>
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <crypto/internal/poly1305.h>
 #include <linux/cpufeature.h>
 #include <linux/jump_label.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -32,11 +33,11 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
 			  unsigned int len, u32 padbit)
 {
 	len = round_down(len, POLY1305_BLOCK_SIZE);
 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
-	    static_branch_likely(&have_neon)) {
+	    static_branch_likely(&have_neon) && likely(may_use_simd())) {
 		do {
 			unsigned int todo = min_t(unsigned int, len, SZ_4K);
 
 			kernel_neon_begin();
 			poly1305_blocks_neon(state, src, todo, padbit);
--
2.50.0
* [PATCH 3/5] lib/crypto: arm64/poly1305: Fix register corruption in no-SIMD contexts
From: Eric Biggers @ 2025-07-06 23:10 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers, stable

Restore the SIMD usability check that was removed by commit a59e5468a921
("crypto: arm64/poly1305 - Add block-only interface").

This safety check is cheap, and it is well worth it to eliminate a
footgun. While the Poly1305 functions *should* be called only where SIMD
registers are usable, if they are called elsewhere anyway, they should
just do the right thing instead of corrupting random tasks' registers
and/or computing incorrect MACs. Fixing this is also needed for
poly1305_kunit to pass.

Use may_use_simd() instead of the original crypto_simd_usable(), since
poly1305_kunit won't rely on crypto_simd_disabled_for_test.

Fixes: a59e5468a921 ("crypto: arm64/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
lib/crypto/arm64/poly1305-glue.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/crypto/arm64/poly1305-glue.c b/lib/crypto/arm64/poly1305-glue.c
index c9a74766785b..31aea21ce42f 100644
--- a/lib/crypto/arm64/poly1305-glue.c
+++ b/lib/crypto/arm64/poly1305-glue.c
@@ -5,10 +5,11 @@
  * Copyright (C) 2019 Linaro Ltd. <ard.biesheuvel@linaro.org>
  */
 
 #include <asm/hwcap.h>
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <crypto/internal/poly1305.h>
 #include <linux/cpufeature.h>
 #include <linux/jump_label.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -31,11 +32,11 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *src,
 			  unsigned int len, u32 padbit)
 {
 	len = round_down(len, POLY1305_BLOCK_SIZE);
-	if (static_branch_likely(&have_neon)) {
+	if (static_branch_likely(&have_neon) && likely(may_use_simd())) {
 		do {
 			unsigned int todo = min_t(unsigned int, len, SZ_4K);
 
 			kernel_neon_begin();
 			poly1305_blocks_neon(state, src, todo, padbit);
--
2.50.0
* [PATCH 4/5] lib/crypto: x86/poly1305: Fix register corruption in no-SIMD contexts
From: Eric Biggers @ 2025-07-06 23:10 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers, stable

Restore the SIMD usability check and base conversion that were removed
by commit 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only
interface").

This safety check is cheap, and it is well worth it to eliminate a
footgun. While the Poly1305 functions *should* be called only where SIMD
registers are usable, if they are called elsewhere anyway, they should
just do the right thing instead of corrupting random tasks' registers
and/or computing incorrect MACs. Fixing this is also needed for
poly1305_kunit to pass.

Use irq_fpu_usable() instead of the original crypto_simd_usable(), since
poly1305_kunit won't rely on crypto_simd_disabled_for_test.
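
For background on the base conversion being restored: the AVX code keeps
the 130-bit accumulator as five 26-bit limbs (base 2^26, which maps
nicely onto vector lanes), while the scalar code keeps it as two 64-bit
limbs plus a 2-bit top limb (base 2^64). A sketch of just the repacking
step, assuming carries have already been propagated so each h[i] fits in
26 bits (the struct and function names here are illustrative; the real
code is convert_to_base2_64() in the diff below):

#include <linux/types.h>

struct acc_base2_26 { u32 h[5]; };	/* h[i] = accumulator bits [26*i, 26*i + 26) */
struct acc_base2_64 { u64 hs[3]; };	/* hs[0..1] = low 128 bits, hs[2] = top 2 bits */

static void repack_base2_26_to_base2_64(const struct acc_base2_26 *in,
					struct acc_base2_64 *out)
{
	/* Bits 0..63: all of h[0] and h[1], plus the low 12 bits of h[2]. */
	out->hs[0] = ((u64)in->h[2] << 52) | ((u64)in->h[1] << 26) | in->h[0];
	/* Bits 64..127: high 14 bits of h[2], all of h[3], low 24 bits of h[4]. */
	out->hs[1] = ((u64)in->h[4] << 40) | ((u64)in->h[3] << 14) | (in->h[2] >> 12);
	/* Bits 128..129: the top 2 bits of h[4]. */
	out->hs[2] = in->h[4] >> 24;
}
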
Fixes: 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
lib/crypto/x86/poly1305_glue.c | 40 +++++++++++++++++++++++++++++++++-
1 file changed, 39 insertions(+), 1 deletion(-)
diff --git a/lib/crypto/x86/poly1305_glue.c b/lib/crypto/x86/poly1305_glue.c
index b7e78a583e07..968d84677631 100644
--- a/lib/crypto/x86/poly1305_glue.c
+++ b/lib/crypto/x86/poly1305_glue.c
@@ -23,10 +23,46 @@ struct poly1305_arch_internal {
 	u64 r[2];
 	u64 pad;
 	struct { u32 r2, r1, r4, r3; } rn[9];
 };
 
+/*
+ * The AVX code uses base 2^26, while the scalar code uses base 2^64. If we hit
+ * the unfortunate situation of using AVX and then having to go back to scalar
+ * -- because the user is silly and has called the update function from two
+ * separate contexts -- then we need to convert back to the original base before
+ * proceeding. It is possible to reason that the initial reduction below is
+ * sufficient given the implementation invariants. However, for an avoidance of
+ * doubt and because this is not performance critical, we do the full reduction
+ * anyway. Z3 proof of below function: https://xn--4db.cc/ltPtHCKN/py
+ */
+static void convert_to_base2_64(void *ctx)
+{
+	struct poly1305_arch_internal *state = ctx;
+	u32 cy;
+
+	if (!state->is_base2_26)
+		return;
+
+	cy = state->h[0] >> 26; state->h[0] &= 0x3ffffff; state->h[1] += cy;
+	cy = state->h[1] >> 26; state->h[1] &= 0x3ffffff; state->h[2] += cy;
+	cy = state->h[2] >> 26; state->h[2] &= 0x3ffffff; state->h[3] += cy;
+	cy = state->h[3] >> 26; state->h[3] &= 0x3ffffff; state->h[4] += cy;
+	state->hs[0] = ((u64)state->h[2] << 52) | ((u64)state->h[1] << 26) | state->h[0];
+	state->hs[1] = ((u64)state->h[4] << 40) | ((u64)state->h[3] << 14) | (state->h[2] >> 12);
+	state->hs[2] = state->h[4] >> 24;
+	/* Unsigned Less Than: branchlessly produces 1 if a < b, else 0. */
+#define ULT(a, b) ((a ^ ((a ^ b) | ((a - b) ^ b))) >> (sizeof(a) * 8 - 1))
+	cy = (state->hs[2] >> 2) + (state->hs[2] & ~3ULL);
+	state->hs[2] &= 3;
+	state->hs[0] += cy;
+	state->hs[1] += (cy = ULT(state->hs[0], cy));
+	state->hs[2] += ULT(state->hs[1], cy);
+#undef ULT
+	state->is_base2_26 = 0;
+}
+
 asmlinkage void poly1305_block_init_arch(
 	struct poly1305_block_state *state,
 	const u8 raw_key[POLY1305_BLOCK_SIZE]);
 EXPORT_SYMBOL_GPL(poly1305_block_init_arch);
 asmlinkage void poly1305_blocks_x86_64(struct poly1305_arch_internal *ctx,
@@ -60,11 +96,13 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *inp,
 
 	/* SIMD disables preemption, so relax after processing each page. */
 	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
 		     SZ_4K % POLY1305_BLOCK_SIZE);
 
-	if (!static_branch_likely(&poly1305_use_avx)) {
+	if (!static_branch_likely(&poly1305_use_avx) ||
+	    unlikely(!irq_fpu_usable())) {
+		convert_to_base2_64(ctx);
 		poly1305_blocks_x86_64(ctx, inp, len, padbit);
 		return;
 	}
 
 	do {
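
As an aside, the ULT() macro above computes the borrow of the
subtraction a - b branchlessly and shifts it down from the top bit,
yielding 1 exactly when a < b. A small user-space harness to
sanity-check it (illustrative only, not part of the patch):

#include <stdint.h>
#include <stdio.h>

/* Branchless unsigned less-than, as used in convert_to_base2_64(). */
#define ULT(a, b) ((a ^ ((a ^ b) | ((a - b) ^ b))) >> (sizeof(a) * 8 - 1))

int main(void)
{
	const uint64_t cases[][2] = {
		{ 0, 1 }, { 1, 0 }, { 5, 5 },
		{ UINT64_MAX, 0 }, { 0, UINT64_MAX },
	};

	for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		uint64_t a = cases[i][0], b = cases[i][1];

		/* ULT(a, b) must match the ordinary comparison a < b. */
		printf("ULT(%llu, %llu) = %llu, expected %d\n",
		       (unsigned long long)a, (unsigned long long)b,
		       (unsigned long long)ULT(a, b), a < b);
	}
	return 0;
}
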
--
2.50.0
* [PATCH 5/5] lib/crypto: x86/poly1305: Fix performance regression on short messages
From: Eric Biggers @ 2025-07-06 23:11 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86, Eric Biggers, stable

Restore the len >= 288 condition on using the AVX implementation, which
was inadvertently removed by commit 318c53ae02f2 ("crypto: x86/poly1305 -
Add block-only interface"). This check accounted for the overhead of key
power computation, kernel-mode "FPU" sections, and the tail handling
associated with the AVX code. Indeed, restoring this check improves
performance for len < 256, as measured using poly1305_kunit on an
"AMD Ryzen AI 9 365" (Zen 5) CPU:

Length       Before        After
======   ==========   ==========
     1      30 MB/s      36 MB/s
    16     516 MB/s     598 MB/s
    64    1700 MB/s    1882 MB/s
   127    2265 MB/s    2651 MB/s
   128    2457 MB/s    2827 MB/s
   200    2702 MB/s    3238 MB/s
   256    3841 MB/s    3768 MB/s
   511    4580 MB/s    4585 MB/s
   512    5430 MB/s    5398 MB/s
  1024    7268 MB/s    7305 MB/s
  3173    8999 MB/s    8948 MB/s
  4096    9942 MB/s    9921 MB/s
 16384   10557 MB/s   10545 MB/s

While the optimal threshold for this CPU might be slightly lower than
288 (see the len == 256 case), other CPUs would need to be tested too,
and these sorts of benchmarks can underestimate the true cost of
kernel-mode "FPU". Therefore, for now just restore the 288 threshold.
Fixes: 318c53ae02f2 ("crypto: x86/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
lib/crypto/x86/poly1305_glue.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/crypto/x86/poly1305_glue.c b/lib/crypto/x86/poly1305_glue.c
index 968d84677631..856d48fd422b 100644
--- a/lib/crypto/x86/poly1305_glue.c
+++ b/lib/crypto/x86/poly1305_glue.c
@@ -96,11 +96,19 @@ void poly1305_blocks_arch(struct poly1305_block_state *state, const u8 *inp,
 
 	/* SIMD disables preemption, so relax after processing each page. */
 	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
 		     SZ_4K % POLY1305_BLOCK_SIZE);
 
+	/*
+	 * The AVX implementations have significant setup overhead (e.g. key
+	 * power computation, kernel FPU enabling) which makes them slower for
+	 * short messages. Fall back to the scalar implementation for messages
+	 * shorter than 288 bytes, unless the AVX-specific key setup has already
+	 * been performed (indicated by ctx->is_base2_26).
+	 */
 	if (!static_branch_likely(&poly1305_use_avx) ||
+	    (len < POLY1305_BLOCK_SIZE * 18 && !ctx->is_base2_26) ||
 	    unlikely(!irq_fpu_usable())) {
 		convert_to_base2_64(ctx);
 		poly1305_blocks_x86_64(ctx, inp, len, padbit);
 		return;
 	}
--
2.50.0
* Re: [PATCH 0/5] lib/crypto: Poly1305 fixes
From: Ard Biesheuvel @ 2025-07-08 3:22 UTC
To: Eric Biggers
Cc: linux-crypto, linux-kernel, Jason A. Donenfeld, linux-arm-kernel,
	x86

On Mon, 7 Jul 2025 at 09:11, Eric Biggers <ebiggers@kernel.org> wrote:
>
> This series is also available at:
>
> git fetch https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git poly1305-fixes
>
> This series fixes the arm, arm64, and x86 Poly1305 functions to not
> corrupt random tasks' registers when called in the "wrong" context. It
> also fixes a performance regression on x86 with short messages.
>
> This series is needed for my upcoming poly1305_kunit test to pass.
>
> Eric Biggers (5):
> lib/crypto: arm/poly1305: Remove unneeded empty weak function
> lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts
> lib/crypto: arm64/poly1305: Fix register corruption in no-SIMD
> contexts
> lib/crypto: x86/poly1305: Fix register corruption in no-SIMD contexts
> lib/crypto: x86/poly1305: Fix performance regression on short messages
>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
* Re: [PATCH 0/5] lib/crypto: Poly1305 fixes
From: Eric Biggers @ 2025-07-09 19:16 UTC
To: linux-crypto
Cc: linux-kernel, Ard Biesheuvel, Jason A. Donenfeld,
	linux-arm-kernel, x86

On Sun, Jul 06, 2025 at 04:10:55PM -0700, Eric Biggers wrote:
> This series is also available at:
>
> git fetch https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git poly1305-fixes
>
> This series fixes the arm, arm64, and x86 Poly1305 functions to not
> corrupt random tasks' registers when called in the "wrong" context. It
> also fixes a performance regression on x86 with short messages.
>
> This series is needed for my upcoming poly1305_kunit test to pass.
>
> Eric Biggers (5):
> lib/crypto: arm/poly1305: Remove unneeded empty weak function
> lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts
> lib/crypto: arm64/poly1305: Fix register corruption in no-SIMD
> contexts
> lib/crypto: x86/poly1305: Fix register corruption in no-SIMD contexts
> lib/crypto: x86/poly1305: Fix performance regression on short messages
>
> lib/crypto/arm/poly1305-glue.c | 8 ++----
> lib/crypto/arm64/poly1305-glue.c | 3 +-
> lib/crypto/x86/poly1305_glue.c | 48 +++++++++++++++++++++++++++++++-
> 3 files changed, 51 insertions(+), 8 deletions(-)
>
>
> base-commit: f1da28dfadd26ef95bbd0b1ddf066e7ffe1505ff
Applied to https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=libcrypto-next
- Eric