* Re: [PATCH v9 1/2] lib: checksum: Fix type casting in checksum kunits
[not found] ` <20240221-fix_sparse_errors_checksum_tests-v9-1-bff4d73ab9d1@rivosinc.com>
@ 2024-02-23 9:13 ` Christophe Leroy
0 siblings, 0 replies; 5+ messages in thread
From: Christophe Leroy @ 2024-02-23 9:13 UTC (permalink / raw)
To: Charlie Jenkins, Guenter Roeck, David Laight, Palmer Dabbelt,
Andrew Morton, Helge Deller, James E.J. Bottomley, Parisc List,
Al Viro
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org, kernel test robot
On 22/02/2024 03:55, Charlie Jenkins wrote:
> The checksum functions use the types __wsum and __sum16. These need to
> be explicitly cast to, because they will cause sparse errors otherwise.
This is not the correct fix. Forcibly casting silences sparse and hides
the warnings, but it does not fix the underlying problem on big endian
architectures, which is what sparse is reporting to you.
In order to fix both the sparse warnings and the related issues, you
have to perform proper endian conversion, similar to what was done with
commit b38460bc463c ("kunit: Fix checksum tests on big endian CPUs")
The following change is what your patch should do:
diff --git a/lib/checksum_kunit.c b/lib/checksum_kunit.c
index 225bb7701460..bf70850035c7 100644
--- a/lib/checksum_kunit.c
+++ b/lib/checksum_kunit.c
@@ -215,7 +215,7 @@ static const u32 init_sums_no_overflow[] = {
0xffff0000, 0xfffffffb,
};
-static const __sum16 expected_csum_ipv6_magic[] = {
+static const u16 expected_csum_ipv6_magic[] = {
0x18d4, 0x3085, 0x2e4b, 0xd9f4, 0xbdc8, 0x78f, 0x1034, 0x8422, 0x6fc0,
0xd2f6, 0xbeb5, 0x9d3, 0x7e2a, 0x312e, 0x778e, 0xc1bb, 0x7cf2, 0x9d1e,
0xca21, 0xf3ff, 0x7569, 0xb02e, 0xca86, 0x7e76, 0x4539, 0x45e3, 0xf28d,
@@ -241,7 +241,7 @@ static const __sum16 expected_csum_ipv6_magic[] = {
0x3845, 0x1014
};
-static const __sum16 expected_fast_csum[] = {
+static const u16 expected_fast_csum[] = {
0xda83, 0x45da, 0x4f46, 0x4e4f, 0x34e, 0xe902, 0xa5e9, 0x87a5, 0x7187,
0x5671, 0xf556, 0x6df5, 0x816d, 0x8f81, 0xbb8f, 0xfbba, 0x5afb, 0xbe5a,
0xedbe, 0xabee, 0x6aac, 0xe6b, 0xea0d, 0x67ea, 0x7e68, 0x8a7e, 0x6f8a,
@@ -577,7 +577,8 @@ static void test_csum_no_carry_inputs(struct kunit *test)
static void test_ip_fast_csum(struct kunit *test)
{
- __sum16 csum_result, expected;
+ __sum16 csum_result;
+ u16 expected;
for (int len = IPv4_MIN_WORDS; len < IPv4_MAX_WORDS; len++) {
for (int index = 0; index < NUM_IP_FAST_CSUM_TESTS; index++) {
@@ -586,7 +587,7 @@ static void test_ip_fast_csum(struct kunit *test)
expected_fast_csum[(len - IPv4_MIN_WORDS) *
NUM_IP_FAST_CSUM_TESTS +
index];
- CHECK_EQ(expected, csum_result);
+ CHECK_EQ(to_sum16(expected), csum_result);
}
}
}
@@ -598,7 +599,7 @@ static void test_csum_ipv6_magic(struct kunit *test)
const struct in6_addr *daddr;
unsigned int len;
unsigned char proto;
- unsigned int csum;
+ __wsum csum;
const int daddr_offset = sizeof(struct in6_addr);
const int len_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr);
@@ -611,10 +612,10 @@ static void test_csum_ipv6_magic(struct kunit *test)
saddr = (const struct in6_addr *)(random_buf + i);
daddr = (const struct in6_addr *)(random_buf + i +
daddr_offset);
- len = *(unsigned int *)(random_buf + i + len_offset);
+ len = le32_to_cpu(*(__le32 *)(random_buf + i + len_offset));
proto = *(random_buf + i + proto_offset);
- csum = *(unsigned int *)(random_buf + i + csum_offset);
- CHECK_EQ(expected_csum_ipv6_magic[i],
+ csum = *(__wsum *)(random_buf + i + csum_offset);
+ CHECK_EQ(to_sum16(expected_csum_ipv6_magic[i]),
csum_ipv6_magic(saddr, daddr, len, proto, csum));
}
#endif /* !CONFIG_NET */
---
Christophe
* Re: [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests
[not found] ` <20240221-fix_sparse_errors_checksum_tests-v9-2-bff4d73ab9d1@rivosinc.com>
@ 2024-02-23 10:06 ` Christophe Leroy
2024-02-23 10:28 ` David Laight
2024-02-23 17:54 ` Charlie Jenkins
0 siblings, 2 replies; 5+ messages in thread
From: Christophe Leroy @ 2024-02-23 10:06 UTC (permalink / raw)
To: Charlie Jenkins, Guenter Roeck, David Laight, Palmer Dabbelt,
Andrew Morton, Helge Deller, James E.J. Bottomley, Parisc List,
Al Viro
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org
Le 22/02/2024 à 03:55, Charlie Jenkins a écrit :
> The test cases for ip_fast_csum and csum_ipv6_magic were failing on a
> variety of architectures that are big endian or do not support
> misaligned accesses. Both of these test cases are changed to support big
> and little endian architectures.
It is unclear. The endianness issue and the alignment issue are two
independent subjects that should be handled in separate patches.
According to the subject of this patch, only misaligned accesses should
be handled here. Endianness should have been fixed by patch 1.
Also, it would be nice to give an example of an architecture that has
such a problem, and to explain exactly what the problem is.
>
> The test for ip_fast_csum is changed to align the data along (14 +
> NET_IP_ALIGN) bytes which is the alignment of an IP header. The test for
> csum_ipv6_magic aligns the data using a struct. An extra padding field
> is added to the struct to ensure that the size of the struct is the same
> on all architectures (44 bytes).
What is the purpose of that padding ? You take fields one by one and
never do anything with the full struct.
>
> Fixes: 6f4c45cbcb00 ("kunit: Add tests for csum_ipv6_magic and ip_fast_csum")
> Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
> Reviewed-by: Guenter Roeck <linux@roeck-us.net>
> Tested-by: Guenter Roeck <linux@roeck-us.net>
> ---
> lib/checksum_kunit.c | 393 ++++++++++++++++++---------------------------------
> 1 file changed, 134 insertions(+), 259 deletions(-)
>
> diff --git a/lib/checksum_kunit.c b/lib/checksum_kunit.c
> index 776ad3d6d5a1..f1b18e3628dd 100644
> --- a/lib/checksum_kunit.c
> +++ b/lib/checksum_kunit.c
> @@ -13,8 +13,9 @@
>
> #define IPv4_MIN_WORDS 5
> #define IPv4_MAX_WORDS 15
> -#define NUM_IPv6_TESTS 200
> -#define NUM_IP_FAST_CSUM_TESTS 181
> +#define WORD_ALIGNMENT 4
Is that macro really needed ? Can't you just use sizeof(u32) or
something similar ?
> +/* Ethernet headers are 14 bytes and NET_IP_ALIGN is used to align them */
> +#define IP_ALIGNMENT (14 + NET_IP_ALIGN)
Only if no VLAN.
When using VLANs it is 4 bytes more. But why do you mind that at all ?
>
> /* Values for a little endian CPU. Byte swap each half on big endian CPU. */
> static const u32 random_init_sum = 0x2847aab;
...
> @@ -578,15 +451,19 @@ static void test_csum_no_carry_inputs(struct kunit *test)
> static void test_ip_fast_csum(struct kunit *test)
> {
> __sum16 csum_result, expected;
> -
> - for (int len = IPv4_MIN_WORDS; len < IPv4_MAX_WORDS; len++) {
> - for (int index = 0; index < NUM_IP_FAST_CSUM_TESTS; index++) {
> - csum_result = ip_fast_csum(random_buf + index, len);
> - expected = (__force __sum16)
> - expected_fast_csum[(len - IPv4_MIN_WORDS) *
> - NUM_IP_FAST_CSUM_TESTS +
> - index];
> - CHECK_EQ(expected, csum_result);
> + int num_tests = (MAX_LEN / WORD_ALIGNMENT - IPv4_MAX_WORDS * WORD_ALIGNMENT);
> +
> + for (int i = 0; i < num_tests; i++) {
> + memcpy(&tmp_buf[IP_ALIGNMENT],
> + random_buf + (i * WORD_ALIGNMENT),
> + IPv4_MAX_WORDS * WORD_ALIGNMENT);
That looks weird.
> +
> + for (int len = IPv4_MIN_WORDS; len <= IPv4_MAX_WORDS; len++) {
> + int index = (len - IPv4_MIN_WORDS) +
> + i * ((IPv4_MAX_WORDS - IPv4_MIN_WORDS) + 1);
Missing blank line after declaration.
> + csum_result = ip_fast_csum(tmp_buf + IP_ALIGNMENT, len);
> + expected = (__force __sum16)htons(expected_fast_csum[index]);
You must do proper type conversion using to_sum16(), not a forced cast.
> + CHECK_EQ(csum_result, expected);
> }
> }
> }
> @@ -594,29 +471,27 @@ static void test_ip_fast_csum(struct kunit *test)
> static void test_csum_ipv6_magic(struct kunit *test)
> {
> #if defined(CONFIG_NET)
> - const struct in6_addr *saddr;
> - const struct in6_addr *daddr;
> - unsigned int len;
> - unsigned char proto;
> - unsigned int csum;
> -
> - const int daddr_offset = sizeof(struct in6_addr);
> - const int len_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr);
> - const int proto_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr) +
> - sizeof(int);
> - const int csum_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr) +
> - sizeof(int) + sizeof(char);
> -
> - for (int i = 0; i < NUM_IPv6_TESTS; i++) {
> - saddr = (const struct in6_addr *)(random_buf + i);
> - daddr = (const struct in6_addr *)(random_buf + i +
> - daddr_offset);
> - len = *(unsigned int *)(random_buf + i + len_offset);
> - proto = *(random_buf + i + proto_offset);
> - csum = *(unsigned int *)(random_buf + i + csum_offset);
> - CHECK_EQ((__force __sum16)expected_csum_ipv6_magic[i],
> - csum_ipv6_magic(saddr, daddr, len, proto,
> - (__force __wsum)csum));
> + struct csum_ipv6_magic_data {
> + const struct in6_addr saddr;
> + const struct in6_addr daddr;
> + __be32 len;
> + __wsum csum;
> + unsigned char proto;
> + unsigned char pad[3];
> + } *data;
> + __sum16 csum_result, expected;
> + int ipv6_num_tests = ((MAX_LEN - sizeof(struct csum_ipv6_magic_data)) / WORD_ALIGNMENT);
> +
> + for (int i = 0; i < ipv6_num_tests; i++) {
> + int index = i * WORD_ALIGNMENT;
> +
> + data = (struct csum_ipv6_magic_data *)(random_buf + index);
> +
> + csum_result = csum_ipv6_magic(&data->saddr, &data->daddr,
> + ntohl(data->len), data->proto,
> + data->csum);
> + expected = (__force __sum16)htons(expected_csum_ipv6_magic[i]);
Same, use to_sum16() instead htons() and a forced cast.
> + CHECK_EQ(csum_result, expected);
> }
> #endif /* !CONFIG_NET */
> }
>
Christophe
* RE: [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests
2024-02-23 10:06 ` [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests Christophe Leroy
@ 2024-02-23 10:28 ` David Laight
2024-02-23 14:49 ` Guenter Roeck
2024-02-23 17:54 ` Charlie Jenkins
1 sibling, 1 reply; 5+ messages in thread
From: David Laight @ 2024-02-23 10:28 UTC (permalink / raw)
To: 'Christophe Leroy', Charlie Jenkins, Guenter Roeck,
Palmer Dabbelt, Andrew Morton, Helge Deller, James E.J. Bottomley,
Parisc List, Al Viro
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org
From: Christophe Leroy
> Sent: 23 February 2024 10:07
...
> > +/* Ethernet headers are 14 bytes and NET_IP_ALIGN is used to align them */
> > +#define IP_ALIGNMENT (14 + NET_IP_ALIGN)
>
> Only if no VLAN.
>
> When using VLANs it is 4 bytes more. But why do you mind that at all ?
Wasn't one architecture faulting on a double-register read?
Where that had to be aligned (probably 8 bytes) but a normal
memory read could be misaligned?
I doubt it is valid to assume that the IP headers is 8 byte
aligned when NET_IP_ALIGN is 2.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
* Re: [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests
2024-02-23 10:28 ` David Laight
@ 2024-02-23 14:49 ` Guenter Roeck
0 siblings, 0 replies; 5+ messages in thread
From: Guenter Roeck @ 2024-02-23 14:49 UTC (permalink / raw)
To: David Laight, 'Christophe Leroy', Charlie Jenkins,
Palmer Dabbelt, Andrew Morton, Helge Deller, James E.J. Bottomley,
Parisc List, Al Viro
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org
On 2/23/24 02:28, David Laight wrote:
> From: Christophe Leroy
>> Sent: 23 February 2024 10:07
> ...
>>> +/* Ethernet headers are 14 bytes and NET_IP_ALIGN is used to align them */
>>> +#define IP_ALIGNMENT (14 + NET_IP_ALIGN)
>>
>> Only if no VLAN.
>>
>> When using VLANs it is 4 bytes more. But why do you mind that at all ?
>
> Wasn't one architecture faulting on a double-register read?
> Where that had to be aligned (probably 8 bytes) but a normal
> memory read could be misaligned?
>
That was hppa64, and the problem was with its qemu emulation,
not with this code.
Guenter
* Re: [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests
2024-02-23 10:06 ` [PATCH v9 2/2] lib: checksum: Use aligned accesses for ip_fast_csum and csum_ipv6_magic tests Christophe Leroy
2024-02-23 10:28 ` David Laight
@ 2024-02-23 17:54 ` Charlie Jenkins
1 sibling, 0 replies; 5+ messages in thread
From: Charlie Jenkins @ 2024-02-23 17:54 UTC (permalink / raw)
To: Christophe Leroy
Cc: Parisc List, netdev@vger.kernel.org, Helge Deller,
linux-kernel@vger.kernel.org, James E.J. Bottomley, David Laight,
Palmer Dabbelt, Andrew Morton, linuxppc-dev@lists.ozlabs.org,
Guenter Roeck
On Fri, Feb 23, 2024 at 10:06:54AM +0000, Christophe Leroy wrote:
>
>
> On 22/02/2024 03:55, Charlie Jenkins wrote:
> > The test cases for ip_fast_csum and csum_ipv6_magic were failing on a
> > variety of architectures that are big endian or do not support
> > misaligned accesses. Both of these test cases are changed to support big
> > and little endian architectures.
>
> It is unclear. The endianness issue and the alignment issue are two
> independent subjects that should be handled in separate patches.
>
> According to the subject of this patch, only misaligned accesses should
> be handled here. Endianness should have been fixed by patch 1.
>
> Also, it would be nice to give an example of an architecture that has
> such a problem, and to explain exactly what the problem is.
>
> >
> > The test for ip_fast_csum is changed to align the data along (14 +
> > NET_IP_ALIGN) bytes which is the alignment of an IP header. The test for
> > csum_ipv6_magic aligns the data using a struct. An extra padding field
> > is added to the struct to ensure that the size of the struct is the same
> > on all architectures (44 bytes).
>
> What is the purpose of that padding ? You take fields one by one and
> never do anything with the full struct.
sizeof(struct csum_ipv6_magic_data) takes the full struct into account.
>
> >
> > Fixes: 6f4c45cbcb00 ("kunit: Add tests for csum_ipv6_magic and ip_fast_csum")
> > Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
> > Reviewed-by: Guenter Roeck <linux@roeck-us.net>
> > Tested-by: Guenter Roeck <linux@roeck-us.net>
> > ---
> > lib/checksum_kunit.c | 393 ++++++++++++++++++---------------------------------
> > 1 file changed, 134 insertions(+), 259 deletions(-)
> >
> > diff --git a/lib/checksum_kunit.c b/lib/checksum_kunit.c
> > index 776ad3d6d5a1..f1b18e3628dd 100644
> > --- a/lib/checksum_kunit.c
> > +++ b/lib/checksum_kunit.c
> > @@ -13,8 +13,9 @@
> >
> > #define IPv4_MIN_WORDS 5
> > #define IPv4_MAX_WORDS 15
> > -#define NUM_IPv6_TESTS 200
> > -#define NUM_IP_FAST_CSUM_TESTS 181
> > +#define WORD_ALIGNMENT 4
>
> Is that macro really needed ? Can't you just use sizeof(u32) or
> something similar ?
It is for readability. The macro was introduced to make it explicit that
alignment to a 32-bit boundary is happening, so I called it word
alignment.
>
>
> > +/* Ethernet headers are 14 bytes and NET_IP_ALIGN is used to align them */
> > +#define IP_ALIGNMENT (14 + NET_IP_ALIGN)
>
> Only if no VLAN.
>
> When using VLANs it is 4 bytes more. But why do you mind that at all
> ?
Architectures make assumptions about the alignment of packets to
optimize code. Not doing this alignment will cause illegal misaligned
accesses on some ARM platforms. Yes, VLANs are ignored here, but this
alignment is required to be supported and that is what the test cases
are stressing.
>
> >
> > /* Values for a little endian CPU. Byte swap each half on big endian CPU. */
> > static const u32 random_init_sum = 0x2847aab;
>
> ...
>
> > @@ -578,15 +451,19 @@ static void test_csum_no_carry_inputs(struct kunit *test)
> > static void test_ip_fast_csum(struct kunit *test)
> > {
> > __sum16 csum_result, expected;
> > -
> > - for (int len = IPv4_MIN_WORDS; len < IPv4_MAX_WORDS; len++) {
> > - for (int index = 0; index < NUM_IP_FAST_CSUM_TESTS; index++) {
> > - csum_result = ip_fast_csum(random_buf + index, len);
> > - expected = (__force __sum16)
> > - expected_fast_csum[(len - IPv4_MIN_WORDS) *
> > - NUM_IP_FAST_CSUM_TESTS +
> > - index];
> > - CHECK_EQ(expected, csum_result);
> > + int num_tests = (MAX_LEN / WORD_ALIGNMENT - IPv4_MAX_WORDS * WORD_ALIGNMENT);
> > +
> > + for (int i = 0; i < num_tests; i++) {
> > + memcpy(&tmp_buf[IP_ALIGNMENT],
> > + random_buf + (i * WORD_ALIGNMENT),
> > + IPv4_MAX_WORDS * WORD_ALIGNMENT);
>
> That looks weird.
If you have constructive feedback then I would be happy to clarify.
>
> > +
> > + for (int len = IPv4_MIN_WORDS; len <= IPv4_MAX_WORDS; len++) {
> > + int index = (len - IPv4_MIN_WORDS) +
> > + i * ((IPv4_MAX_WORDS - IPv4_MIN_WORDS) + 1);
>
> Missing blank line after declaration.
>
> > + csum_result = ip_fast_csum(tmp_buf + IP_ALIGNMENT, len);
> > + expected = (__force __sum16)htons(expected_fast_csum[index]);
>
> You must do proper type conversion using to_sum16(), not a forced cast.
>
to_sum16() also does a forced cast; if to_sum16() is a "proper type
conversion", then this is as well. The distinction seems arbitrary to me,
but I can make it to_sum16() since it makes no difference.
> > + CHECK_EQ(csum_result, expected);
> > }
> > }
> > }
> > @@ -594,29 +471,27 @@ static void test_ip_fast_csum(struct kunit *test)
> > static void test_csum_ipv6_magic(struct kunit *test)
> > {
> > #if defined(CONFIG_NET)
> > - const struct in6_addr *saddr;
> > - const struct in6_addr *daddr;
> > - unsigned int len;
> > - unsigned char proto;
> > - unsigned int csum;
> > -
> > - const int daddr_offset = sizeof(struct in6_addr);
> > - const int len_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr);
> > - const int proto_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr) +
> > - sizeof(int);
> > - const int csum_offset = sizeof(struct in6_addr) + sizeof(struct in6_addr) +
> > - sizeof(int) + sizeof(char);
> > -
> > - for (int i = 0; i < NUM_IPv6_TESTS; i++) {
> > - saddr = (const struct in6_addr *)(random_buf + i);
> > - daddr = (const struct in6_addr *)(random_buf + i +
> > - daddr_offset);
> > - len = *(unsigned int *)(random_buf + i + len_offset);
> > - proto = *(random_buf + i + proto_offset);
> > - csum = *(unsigned int *)(random_buf + i + csum_offset);
> > - CHECK_EQ((__force __sum16)expected_csum_ipv6_magic[i],
> > - csum_ipv6_magic(saddr, daddr, len, proto,
> > - (__force __wsum)csum));
> > + struct csum_ipv6_magic_data {
> > + const struct in6_addr saddr;
> > + const struct in6_addr daddr;
> > + __be32 len;
> > + __wsum csum;
> > + unsigned char proto;
> > + unsigned char pad[3];
> > + } *data;
> > + __sum16 csum_result, expected;
> > + int ipv6_num_tests = ((MAX_LEN - sizeof(struct csum_ipv6_magic_data)) / WORD_ALIGNMENT);
> > +
> > + for (int i = 0; i < ipv6_num_tests; i++) {
> > + int index = i * WORD_ALIGNMENT;
> > +
> > + data = (struct csum_ipv6_magic_data *)(random_buf + index);
> > +
> > + csum_result = csum_ipv6_magic(&data->saddr, &data->daddr,
> > + ntohl(data->len), data->proto,
> > + data->csum);
> > + expected = (__force __sum16)htons(expected_csum_ipv6_magic[i]);
>
> Same, use to_sum16() instead htons() and a forced cast.
>
> > + CHECK_EQ(csum_result, expected);
> > }
> > #endif /* !CONFIG_NET */
> > }
> >
>
>
> Christophe
- Charlie