* [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Eric Biggers @ 2025-03-04 21:32 UTC
To: linux-kernel
Cc: Bill Wendling, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Ard Biesheuvel,
Nathan Chancellor, Nick Desaulniers, Justin Stitt, David Laight,
linux-crypto, llvm
From: Eric Biggers <ebiggers@google.com>
For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
taking advantage of wider CRC instructions. Note that crc32c-3way.S
already uses this same optimization too.
crc_kunit shows an improvement of about 25% for len=127.
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
This applies to
https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next
arch/x86/lib/crc32-glue.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/lib/crc32-glue.c b/arch/x86/lib/crc32-glue.c
index 4b4721176799a..e3f93b17ac3f1 100644
--- a/arch/x86/lib/crc32-glue.c
+++ b/arch/x86/lib/crc32-glue.c
@@ -55,11 +55,19 @@ u32 crc32c_arch(u32 crc, const u8 *p, size_t len)
 
 	for (num_longs = len / sizeof(unsigned long);
 	     num_longs != 0; num_longs--, p += sizeof(unsigned long))
 		asm(CRC32_INST : "+r" (crc) : ASM_INPUT_RM (*(unsigned long *)p));
 
-	for (len %= sizeof(unsigned long); len; len--, p++)
+	if (sizeof(unsigned long) > 4 && (len & 4)) {
+		asm("crc32l %1, %0" : "+r" (crc) : ASM_INPUT_RM (*(u32 *)p));
+		p += 4;
+	}
+	if (len & 2) {
+		asm("crc32w %1, %0" : "+r" (crc) : ASM_INPUT_RM (*(u16 *)p));
+		p += 2;
+	}
+	if (len & 1)
 		asm("crc32b %1, %0" : "+r" (crc) : ASM_INPUT_RM (*p));
 
 	return crc;
 }
 EXPORT_SYMBOL(crc32c_arch);
base-commit: 13f3d13d88b5dcba104a204fcbee61c75f8407d0
--
2.48.1
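The 8/4/2/1 structure of the patch can be reproduced in ordinary userspace code with the SSE4.2 intrinsics; here is a minimal sketch (the function name and the memcpy-based unaligned loads are illustrative choices, not the kernel code):

#include <nmmintrin.h>	/* SSE4.2 CRC32 intrinsics; build with -msse4.2 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint32_t crc32c_sse42(uint32_t crc, const uint8_t *p, size_t len)
{
	uint64_t c = crc;

	/* Main loop: 8 bytes per crc32q. */
	for (; len >= 8; len -= 8, p += 8) {
		uint64_t v;

		memcpy(&v, p, 8);	/* unaligned-safe load */
		c = _mm_crc32_u64(c, v);
	}
	/* 4-2-1 step-down: at most one CRC instruction per power of two. */
	if (len & 4) {
		uint32_t v;

		memcpy(&v, p, 4);
		c = _mm_crc32_u32((uint32_t)c, v);
		p += 4;
	}
	if (len & 2) {
		uint16_t v;

		memcpy(&v, p, 2);
		c = _mm_crc32_u16((uint32_t)c, v);
		p += 2;
	}
	if (len & 1)
		c = _mm_crc32_u8((uint32_t)c, *p);
	return (uint32_t)c;
}

Because the main loop here reduces len below 8, the (len & 4), (len & 2), and (len & 1) tests pick out exactly the remainder bytes; the kernel patch gets the same effect without reducing len because sizeof(unsigned long) is a power of two, so the low bits of len are the remainder.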
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Uros Bizjak @ 2025-03-05 10:10 UTC
To: Eric Biggers
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
David Laight, linux-crypto, llvm
On Wed, Mar 5, 2025 at 10:26 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> taking advantage of wider CRC instructions. Note that crc32c-3way.S
> already uses this same optimization too.
>
> crc_kunit shows an improvement of about 25% for len=127.
>
> Suggested-by: H. Peter Anvin <hpa@zytor.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
LGTM.
Acked-by: Uros Bizjak <ubizjak@gmail.com>
Thanks,
Uros.
> ---
>
> This applies to
> https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next
>
> arch/x86/lib/crc32-glue.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/lib/crc32-glue.c b/arch/x86/lib/crc32-glue.c
> index 4b4721176799a..e3f93b17ac3f1 100644
> --- a/arch/x86/lib/crc32-glue.c
> +++ b/arch/x86/lib/crc32-glue.c
> @@ -55,11 +55,19 @@ u32 crc32c_arch(u32 crc, const u8 *p, size_t len)
>  
>  	for (num_longs = len / sizeof(unsigned long);
>  	     num_longs != 0; num_longs--, p += sizeof(unsigned long))
>  		asm(CRC32_INST : "+r" (crc) : ASM_INPUT_RM (*(unsigned long *)p));
>  
> -	for (len %= sizeof(unsigned long); len; len--, p++)
> +	if (sizeof(unsigned long) > 4 && (len & 4)) {
> +		asm("crc32l %1, %0" : "+r" (crc) : ASM_INPUT_RM (*(u32 *)p));
> +		p += 4;
> +	}
> +	if (len & 2) {
> +		asm("crc32w %1, %0" : "+r" (crc) : ASM_INPUT_RM (*(u16 *)p));
> +		p += 2;
> +	}
> +	if (len & 1)
>  		asm("crc32b %1, %0" : "+r" (crc) : ASM_INPUT_RM (*p));
>  
>  	return crc;
>  }
>  EXPORT_SYMBOL(crc32c_arch);
>
> base-commit: 13f3d13d88b5dcba104a204fcbee61c75f8407d0
> --
> 2.48.1
>
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: David Laight @ 2025-03-05 14:26 UTC
To: Eric Biggers
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
linux-crypto, llvm
On Tue, 4 Mar 2025 13:32:16 -0800
Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> taking advantage of wider CRC instructions. Note that crc32c-3way.S
> already uses this same optimization too.
An alternative is to add extra zero bytes at the start of the buffer.
They don't affect the crc and just need the first 8 bytes shifted left.
I think any non-zero 'crc-in' just needs to be xor'ed over the first
4 actual data bytes.
(It's over 40 years since I did the maths of CRC.)
You won't notice the misaligned accesses all down the buffer.
When I was testing different ipcsum code, misaligned buffers
cost less than 1 clock per cache line.
I think that was even true for the versions that managed 12 bytes
per clock (including the one Linus committed).
David
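Both properties David describes are easy to check against a bit-at-a-time reference implementation; a minimal userspace sketch, assuming raw (uninverted) CRC values like those crc32c_arch() operates on:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time crc32c (reflected polynomial 0x82f63b78), no inversions. */
static uint32_t crc32c_bitwise(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82f63b78 : 0);
	}
	return crc;
}

int main(void)
{
	const uint8_t data[5] = { 1, 2, 3, 4, 5 };
	const uint8_t padded[8] = { 0, 0, 0, 1, 2, 3, 4, 5 };
	uint32_t init = 0xdeadbeef;
	uint8_t mixed[5];

	/* 1: with a zero starting CRC, leading zero bytes are a no-op. */
	printf("%08x %08x\n", crc32c_bitwise(0, data, 5),
	       crc32c_bitwise(0, padded, 8));

	/* 2: a non-zero starting CRC can instead be xor'ed over the
	 * first 4 data bytes (little-endian), starting the CRC at 0. */
	memcpy(mixed, data, 5);
	for (int i = 0; i < 4; i++)
		mixed[i] ^= init >> (8 * i);
	printf("%08x %08x\n", crc32c_bitwise(init, data, 5),
	       crc32c_bitwise(0, mixed, 5));
	return 0;
}

Each printf line prints two identical values: leading zero bytes are free when the running CRC is zero, and a non-zero incoming CRC can be folded into the first 4 data bytes with the CRC restarted at zero.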
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Eric Biggers @ 2025-03-05 19:16 UTC
To: David Laight
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
linux-crypto, llvm
On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> On Tue, 4 Mar 2025 13:32:16 -0800
> Eric Biggers <ebiggers@kernel.org> wrote:
>
> > From: Eric Biggers <ebiggers@google.com>
> >
> > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> > taking advantage of wider CRC instructions. Note that crc32c-3way.S
> > already uses this same optimization too.
>
> An alternative is to add extra zero bytes at the start of the buffer.
> They don't affect the crc and just need the first 8 bytes shifted left.
>
> I think any non-zero 'crc-in' just needs to be xor'ed over the first
> 4 actual data bytes.
> (It's over 40 years since I did the maths of CRC.)
>
> You won't notice the misaligned accesses all down the buffer.
> When I was testing different ipcsum code, misaligned buffers
> cost less than 1 clock per cache line.
> I think that was even true for the versions that managed 12 bytes
> per clock (including the one Linus committed).
>
> David
Sure, but that only works when len >= sizeof(unsigned long). Also, the initial
CRC sometimes has to be divided between two unsigned longs.
The following implements this, and you can play around with it a bit if you
want. There may be a way to optimize it a bit more.
But I think you'll find it's a bit more complex than you thought.
I think I'd like to stay with the shorter and simpler 4-2-1 step-down.
u32 crc32c_arch(u32 crc, const u8 *p, size_t len)
{
	if (!static_branch_likely(&have_crc32))
		return crc32c_base(crc, p, len);

	if (IS_ENABLED(CONFIG_X86_64) && len >= CRC32C_PCLMUL_BREAKEVEN &&
	    static_branch_likely(&have_pclmulqdq) && crypto_simd_usable()) {
		kernel_fpu_begin();
		crc = crc32c_x86_3way(crc, p, len);
		kernel_fpu_end();
		return crc;
	}

	if (len % sizeof(unsigned long) != 0) {
		unsigned long msgpoly;
		u32 orig_crc = crc;

		if (len < sizeof(unsigned long)) {
			/* Too short for the zero-prefix trick: 4-2-1 step-down. */
			if (sizeof(unsigned long) > 4 && (len & 4)) {
				asm("crc32l %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u32 *)p));
				p += 4;
			}
			if (len & 2) {
				asm("crc32w %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u16 *)p));
				p += 2;
			}
			if (len & 1)
				asm("crc32b %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*p));
			return crc;
		}

		/*
		 * Shift the first partial word up so that its
		 * len % sizeof(unsigned long) message bytes are preceded by
		 * zero bytes, which don't change a zero CRC.  The initial CRC
		 * is xor'ed over the first 4 message bytes, which may be split
		 * across this word and the next; the cast below avoids an
		 * out-of-range u32 shift when len % 8 >= 4 on 64-bit.
		 */
		msgpoly = (get_unaligned((unsigned long *)p) ^ orig_crc) <<
			  (8 * (-len % sizeof(unsigned long)));
		p += len % sizeof(unsigned long);
		crc = 0;
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));
		msgpoly = get_unaligned((unsigned long *)p) ^
			  ((unsigned long)orig_crc >>
			   (8 * (len % sizeof(unsigned long))));
		p += sizeof(unsigned long);
		len -= (len % sizeof(unsigned long)) + sizeof(unsigned long);
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));
	}

	for (len /= sizeof(unsigned long); len != 0;
	     len--, p += sizeof(unsigned long))
		asm(CRC32_INST : "+r" (crc) : ASM_INPUT_RM (*(unsigned long *)p));

	return crc;
}
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: David Laight @ 2025-03-05 22:07 UTC
To: Eric Biggers
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
linux-crypto, llvm
On Wed, 5 Mar 2025 11:16:08 -0800
Eric Biggers <ebiggers@kernel.org> wrote:
> On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> > On Tue, 4 Mar 2025 13:32:16 -0800
> > Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > > From: Eric Biggers <ebiggers@google.com>
> > >
> > > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > > do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> > > taking advantage of wider CRC instructions. Note that crc32c-3way.S
> > > already uses this same optimization too.
> >
> > An alternative is to add extra zero bytes at the start of the buffer.
> > They don't affect the crc and just need the first 8 bytes shifted left.
> >
> > I think any non-zero 'crc-in' just needs to be xor'ed over the first
> > 4 actual data bytes.
> > (It's over 40 years since I did the maths of CRC.)
...
> > David
>
> Sure, but that only works when len >= sizeof(unsigned long). Also, the initial
> CRC sometimes has to be divided between two unsigned longs.
Yes, I was thinking that might make it a bit more tricky.
I need to find some spare time :-)
I wasn't taught anything about using non-carry multiplies either.
And I can't remember the relevant 'number field' stuff either.
But (with no-carry maths) I think you have:
crc(n + 1) = (crc(n) + data(n)) * poly
If data(n+1) and data(n+2) are zero (handled elsewhere) you have:
crc(n + 3) = (((crc(n) + data(n)) * poly) * poly) * poly
I think that because it is a field this is the same as
crc(n + 3) = (crc(n) + data(n)) * (poly * poly * poly)
which is just a different crc polynomial.
If true your '3-way' cpu doesn't have to use big blocks.
I guess I'm missing something.
David
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Eric Biggers @ 2025-03-06 2:56 UTC
To: David Laight
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
linux-crypto, llvm
On Wed, Mar 05, 2025 at 10:07:39PM +0000, David Laight wrote:
> On Wed, 5 Mar 2025 11:16:08 -0800
> Eric Biggers <ebiggers@kernel.org> wrote:
>
> > On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> > > On Tue, 4 Mar 2025 13:32:16 -0800
> > > Eric Biggers <ebiggers@kernel.org> wrote:
> > >
> > > > From: Eric Biggers <ebiggers@google.com>
> > > >
> > > > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > > > do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> > > > taking advantage of wider CRC instructions. Note that crc32c-3way.S
> > > > already uses this same optimization too.
> > >
> > > An alternative is to add extra zero bytes at the start of the buffer.
> > > They don't affect the crc and just need the first 8 bytes shifted left.
> > >
> > > I think any non-zero 'crc-in' just needs to be xor'ed over the first
> > > 4 actual data bytes.
> > > (It's over 40 years since I did the maths of CRC.)
> ...
> > > David
> >
> > Sure, but that only works when len >= sizeof(unsigned long). Also, the initial
> > CRC sometimes has to be divided between two unsigned longs.
>
> Yes, I was thinking that might make it a bit more tricky.
> I need to find some spare time :-)
>
> I wasn't taught anything about using non-carry multiplies either.
> And I can't remember the relevant 'number field' stuff either.
> But (with no-carry maths) I think you have:
> crc(n + 1) = (crc(n) + data(n)) * poly
> If data(n+1) and data(n+2) are zero (handled elsewhere) you have:
> crc(n + 3) = (((crc(n) + data(n)) * poly) * poly) * poly
> I think that because it is a field this is the same as
> crc(n + 3) = (crc(n) + data(n)) * (poly * poly * poly)
> which is just a different crc polynomial.
> If true your '3-way' cpu doesn't have to use big blocks.
Well, to extend by some constant number of bits 'n', you can carryless-multiply
by the polynomial x^n, pre-reduced by the CRC's generator polynomial. That's
basically how all the CRC implementations using carryless multiplication work.
Take a look at the x86 and riscv optimized code, for example -- especially my
new versions in the crc-next tree at
https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next.
But x86 does not have a scalar carryless multiplication instruction, only vector
(PCLMULQDQ). It does have a scalar CRC instruction, for crc32c *specifically*,
and that is what the code we're discussing is taking advantage of. Given that
there is overhead associated with using kernel-mode FPU (i.e. vector), it makes
sense to do that, at least on short messages.
On longer messages a PCLMULQDQ-only implementation would work well, but so does
interleaving the crc32c scalar instruction on multiple chunks, which is what is
currently wired up in the kernel via crc32c-3way.S. And yes, the chunks for
that do not *have* to be long, but you still need to use pclmulqdq instructions
to combine them (unless you do a really slow bit-at-a-time carryless
multiplication), and you have to enter a kernel-mode FPU section to do that.
- Eric
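The "pre-reduced x^n" operator Eric mentions can also be demonstrated in plain C, without PCLMULQDQ, by treating the one-zero-bit step as a 32x32 GF(2) matrix and squaring it, in the style of zlib's crc32_combine(); a slow, illustrative sketch with the crc32c polynomial, assuming raw (uninverted) CRC values:

#include <stdint.h>
#include <stddef.h>

/* mat[n] is the operator's output when only input bit n is set. */
static uint32_t gf2_matrix_times(const uint32_t *mat, uint32_t vec)
{
	uint32_t sum = 0;

	for (; vec; vec >>= 1, mat++)
		if (vec & 1)
			sum ^= *mat;
	return sum;
}

static void gf2_matrix_square(uint32_t *sq, const uint32_t *mat)
{
	for (int n = 0; n < 32; n++)
		sq[n] = gf2_matrix_times(mat, mat[n]);
}

/* crc32c(A||B) from crcA = crc32c(init, A) and crcB = crc32c(0, B). */
static uint32_t crc32c_combine(uint32_t crcA, uint32_t crcB, size_t lenB)
{
	uint32_t even[32], odd[32];

	odd[0] = 0x82f63b78;		/* one-zero-bit (shift) operator */
	for (int n = 1; n < 32; n++)
		odd[n] = 1u << (n - 1);
	gf2_matrix_square(even, odd);	/* 2 zero bits */
	gf2_matrix_square(odd, even);	/* 4 zero bits */

	/* Square-and-multiply: apply 8 * lenB zero bits to crcA. */
	do {
		gf2_matrix_square(even, odd);	/* 8, 32, 128, ... bits */
		if (lenB & 1)
			crcA = gf2_matrix_times(even, crcA);
		lenB >>= 1;
		if (!lenB)
			break;
		gf2_matrix_square(odd, even);	/* 16, 64, 256, ... bits */
		if (lenB & 1)
			crcA = gf2_matrix_times(odd, crcA);
		lenB >>= 1;
	} while (lenB);

	return crcA ^ crcB;
}

With chunk CRCs crcA = crc32c(init, A) and crcB = crc32c(0, B), crc32c_combine(crcA, crcB, lenB) equals crc32c(init, A||B); the matrix squarings play the role of the precomputed x^(8*lenB) mod G constants that PCLMULQDQ-based code multiplies by directly.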
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: David Laight @ 2025-03-06 5:17 UTC
To: Eric Biggers
Cc: linux-kernel, Bill Wendling, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
Ard Biesheuvel, Nathan Chancellor, Nick Desaulniers, Justin Stitt,
linux-crypto, llvm
On Wed, 5 Mar 2025 18:56:43 -0800
Eric Biggers <ebiggers@kernel.org> wrote:
> On Wed, Mar 05, 2025 at 10:07:39PM +0000, David Laight wrote:
> > On Wed, 5 Mar 2025 11:16:08 -0800
> > Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > > On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> > > > On Tue, 4 Mar 2025 13:32:16 -0800
> > > > Eric Biggers <ebiggers@kernel.org> wrote:
> > > >
> > > > > From: Eric Biggers <ebiggers@google.com>
> > > > >
> > > > > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > > > > do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> > > > > taking advantage of wider CRC instructions. Note that crc32c-3way.S
> > > > > already uses this same optimization too.
> > > >
> > > > An alternative is to add extra zero bytes at the start of the buffer.
> > > > They don't affect the crc and just need the first 8 bytes shifted left.
> > > >
> > > > I think any non-zero 'crc-in' just needs to be xor'ed over the first
> > > > 4 actual data bytes.
> > > > (It's over 40 years since I did the maths of CRC.)
> > ...
> > > > David
> > >
> > > Sure, but that only works when len >= sizeof(unsigned long). Also, the initial
> > > CRC sometimes has to be divided between two unsigned longs.
> >
> > Yes, I was thinking that might make it a bit more tricky.
> > I need to find some spare time :-)
> >
> > I wasn't taught anything about using non-carry multiplies either.
> > And I can't remember the relevant 'number field' stuff either.
> > But (with no-carry maths) I think you have:
> > crc(n + 1) = (crc(n) + data(n)) * poly
> > If data(n+1) and data(n+2) are zero (handled elsewhere) you have:
> > crc(n + 3) = (((crc(n) + data(n)) * poly) * poly) * poly
> > I think that because it is a field this is the same as
> > crc(n + 3) = (crc(n) + data(n)) * (poly * poly * poly)
> > which is just a different crc polynomial.
> > If true your '3-way' cpu doesn't have to use big blocks.
>
> Well, to extend by some constant number of bits 'n', you can carryless-multiply
> by the polynomial x^n, pre-reduced by the CRC's generator polynomial. That's
> basically how all the CRC implementations using carryless multiplication work.
> Take a look at the x86 and riscv optimized code, for example -- especially my
> new versions in the crc-next tree at
> https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next.
>
> But x86 does not have a scalar carryless multiplication instruction, only vector
> (PCLMULQDQ). It does have a scalar CRC instruction, for crc32c *specifically*,
> and that is what the code we're discussing is taking advantage of. Given that
> there is overhead associated with using kernel-mode FPU (i.e. vector), it makes
> sense to do that, at least on short messages.
>
> On longer messages a PCLMULQDQ-only implementation would work well, but so does
> interleaving the crc32c scalar instruction on multiple chunks, which is what is
> currently wired up in the kernel via crc32c-3way.S. And yes, the chunks for
> that do not *have* to be long, but you still need to use pclmulqdq instructions
> to combine them
You should be able to use lookup tables to jump over the other chunks
(so processing a block of zero bytes).
Possibly 8 lookup tables of 16 entries, one table per nibble of the crc (512 bytes; see the sketch below).
Not nice for the d-cache though.
> (unless you do a really slow bit-at-a-time carryless
> multiplication), and you have to enter a kernel-mode FPU section to do that.
That pseudo-code is probably completely wrong.
I suspect that since it is 'just' doing 'data % poly' it relies on the fact that
	327 % 7 == (3 * (100 % 7) + 27) % 7
and reduces the length by one digit
(where a 'digit' could be 64 bits).
I need to look up pclmulqdq.
Trying to do anything with the simd instructions makes my head hurt.
David
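For what it's worth, the 512-byte nibble-table idea mentioned above might look like this in userspace (an illustrative sketch, not proposed kernel code; one table set per fixed zero-block size, built once with a slow bitwise helper):

#include <stdint.h>
#include <stddef.h>

/* 8 tables x 16 entries x 4 bytes = 512 bytes, one table per crc nibble. */
static uint32_t zero_block_tbl[8][16];

/* Advance a raw crc32c value over block_len zero bytes, bit at a time. */
static uint32_t crc32c_zeros_slow(uint32_t crc, size_t block_len)
{
	while (block_len--)
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82f63b78 : 0);
	return crc;
}

static void build_zero_block_tbl(size_t block_len)
{
	for (int t = 0; t < 8; t++)
		for (uint32_t v = 0; v < 16; v++)
			zero_block_tbl[t][v] =
				crc32c_zeros_slow(v << (4 * t), block_len);
}

/* Jump over one block of zero bytes: xor of 8 lookups, by linearity. */
static uint32_t crc32c_skip_zero_block(uint32_t crc)
{
	uint32_t r = 0;

	for (int t = 0; t < 8; t++)
		r ^= zero_block_tbl[t][(crc >> (4 * t)) & 0xf];
	return r;
}

Because the CRC state after a run of zero bytes is a linear function of the incoming state, the result is the xor of the tables' contributions from each 4-bit slice, so skipping a chunk costs 8 loads and xors plus the 512 bytes of d-cache footprint noted above.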
* Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Eric Biggers @ 2025-03-06 17:21 UTC
To: linux-kernel
Cc: Bill Wendling, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, x86, H. Peter Anvin, Ard Biesheuvel,
Nathan Chancellor, Nick Desaulniers, Justin Stitt, David Laight,
linux-crypto, llvm
On Tue, Mar 04, 2025 at 01:32:16PM -0800, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> taking advantage of wider CRC instructions. Note that crc32c-3way.S
> already uses this same optimization too.
>
> crc_kunit shows an improvement of about 25% for len=127.
>
> Suggested-by: H. Peter Anvin <hpa@zytor.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>
> This applies to
> https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next
Applied to
https://web.git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=crc-next
- Eric
Thread overview: 8 messages
2025-03-04 21:32 [PATCH] x86/crc32: optimize tail handling for crc32c short inputs Eric Biggers
2025-03-05 10:10 ` Uros Bizjak
2025-03-05 14:26 ` David Laight
2025-03-05 19:16 ` Eric Biggers
2025-03-05 22:07 ` David Laight
2025-03-06 2:56 ` Eric Biggers
2025-03-06 5:17 ` David Laight
2025-03-06 17:21 ` Eric Biggers