From: Eric Biggers <ebiggers@kernel.org>
To: netdev@vger.kernel.org
Cc: linux-nvme@lists.infradead.org, linux-sctp@vger.kernel.org,
linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
Daniel Borkmann <daniel@iogearbox.net>,
Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>,
Sagi Grimberg <sagi@grimberg.me>,
Ard Biesheuvel <ardb@kernel.org>
Subject: [PATCH v2 02/10] net: add skb_crc32c()
Date: Mon, 19 May 2025 10:50:04 -0700
Message-ID: <20250519175012.36581-3-ebiggers@kernel.org>
In-Reply-To: <20250519175012.36581-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@google.com>

Add skb_crc32c(), which calculates the CRC32C of an sk_buff. It will
replace __skb_checksum(), which unnecessarily supports arbitrary
checksums. Compared to __skb_checksum(), skb_crc32c() (see the
caller-side sketch after this list):

- Uses the correct type for CRC32C values (u32, not __wsum).

- Does not require the caller to provide a skb_checksum_ops struct.

- Is faster because it does not use indirect calls and does not use
  the very slow crc32c_combine().
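
To make the interface difference concrete, here is a rough before/after
caller sketch. It is illustrative only: the function names are made up,
and it is modeled on existing CRC32C callers such as the SCTP checksum
code and skb_crc32c_csum_help(), whose exact casts may differ slightly.

	/* Before: a __wsum-typed API plus an ops struct, with __force
	 * casts needed because CRC32C values are really u32: */
	static u32 skb_crc32c_old_way(const struct sk_buff *skb,
				      int offset, int len)
	{
		return ~(__force u32)__skb_checksum(skb, offset, len,
						    ~(__force __wsum)0,
						    crc32c_csum_stub);
	}

	/* After: plain u32 in and out, no ops struct, no casts: */
	static u32 skb_crc32c_new_way(const struct sk_buff *skb,
				      int offset, int len)
	{
		return ~skb_crc32c(skb, offset, len, ~0U);
	}
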
According to commit 2817a336d4d5 ("net: skb_checksum: allow custom
update/combine for walking skb") which added __skb_checksum(), the
original motivation for the abstraction layer was to avoid code
duplication for CRC32C and other checksums in the future. However:

- No additional checksums showed up after CRC32C. __skb_checksum()
  is only used with the "regular" net checksum and CRC32C.

- Indirect calls are expensive. Commit 2544af0344ba ("net: avoid
  indirect calls in L4 checksum calculation") worked around this
  using the INDIRECT_CALL_1 macro. But that only avoided the indirect
  call for the net checksum, and at the cost of an extra branch.

- The checksums use different types (__wsum and u32), causing casts
  to be needed.

- It made the checksums of fragments be combined (rather than
  chained) for both checksums, despite this being highly
  counterproductive for CRC32C due to how slow crc32c_combine() is.
  This can clearly be seen in commit 4c2f24549644 ("sctp: linearize
  early if it's not GSO") which tried to work around this performance
  bug. With a dedicated function for each checksum, we can instead
  just use the proper strategy for each checksum; both strategies are
  sketched below.
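
To make the combine-versus-chain distinction concrete, here is a
minimal sketch using the lib/crc32 interfaces (buf_a, buf_b, and the
lengths are hypothetical placeholders):

	/* Combining: CRC each segment independently, then merge. The
	 * merge step has to "shift" crc_a past len_b bytes worth of
	 * polynomial state, which is what makes it slow: */
	u32 crc_a = crc32c(~0U, buf_a, len_a);
	u32 crc_b = crc32c(0, buf_b, len_b);
	u32 crc_combined = crc32c_combine(crc_a, crc_b, len_b);

	/* Chaining: continue the running CRC into the next segment;
	 * this costs the same as CRCing one contiguous buffer: */
	u32 crc_chained = crc32c(~0U, buf_a, len_a);
	crc_chained = crc32c(crc_chained, buf_b, len_b);
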
As shown by the following tables, the new function skb_crc32c() is
faster than __skb_checksum(), with the improvement varying greatly from
5% to 2500% depending on the case. The largest improvements come from
fragmented packets, mainly due to eliminating the inefficient
crc32c_combine(). But linear packets are improved too, especially
shorter ones, mainly due to eliminating indirect calls. These
benchmarks were done on AMD Zen 5. On that CPU, Linux uses IBRS instead
of retpoline; an even greater improvement might be seen with retpoline:

Linear sk_buffs

Length in bytes    __skb_checksum cycles    skb_crc32c cycles
===============    =====================    =================
             64                       43                   18
            256                       94                   77
           1420                      204                  161
          16384                     1735                 1642

Nonlinear sk_buffs (even split between head and one fragment)

Length in bytes    __skb_checksum cycles    skb_crc32c cycles
===============    =====================    =================
             64                      579                   22
            256                      829                   77
           1420                     1506                  194
          16384                     4365                 1682

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 include/linux/skbuff.h |  1 +
 net/core/skbuff.c      | 73 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index c7397b17bb08e..7ccc6356acaca 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4201,10 +4201,11 @@ extern const struct skb_checksum_ops *crc32c_csum_stub __read_mostly;
 
 __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len,
 		      __wsum csum, const struct skb_checksum_ops *ops);
 __wsum skb_checksum(const struct sk_buff *skb, int offset, int len,
 		    __wsum csum);
+u32 skb_crc32c(const struct sk_buff *skb, int offset, int len, u32 crc);
 
 static inline void * __must_check
 __skb_header_pointer(const struct sk_buff *skb, int offset, int len,
 		     const void *data, int hlen, void *buffer)
 {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4159107f1666c..94b977db47f9d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -62,10 +62,11 @@
 #include <linux/bitfield.h>
 #include <linux/if_vlan.h>
 #include <linux/mpls.h>
 #include <linux/kcov.h>
 #include <linux/iov_iter.h>
+#include <linux/crc32.h>
 
 #include <net/protocol.h>
 #include <net/dst.h>
 #include <net/sock.h>
 #include <net/checksum.h>
@@ -3631,10 +3632,82 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,
 	BUG_ON(len);
 	return csum;
 }
 EXPORT_SYMBOL(skb_copy_and_csum_bits);
 
+#ifdef CONFIG_NET_CRC32C
+u32 skb_crc32c(const struct sk_buff *skb, int offset, int len, u32 crc)
+{
+	int start = skb_headlen(skb);
+	int i, copy = start - offset;
+	struct sk_buff *frag_iter;
+
+	if (copy > 0) {
+		copy = min(copy, len);
+		crc = crc32c(crc, skb->data + offset, copy);
+		len -= copy;
+		if (len == 0)
+			return crc;
+		offset += copy;
+	}
+
+	if (WARN_ON_ONCE(!skb_frags_readable(skb)))
+		return 0;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		int end;
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		WARN_ON(start > offset + len);
+
+		end = start + skb_frag_size(frag);
+		copy = end - offset;
+		if (copy > 0) {
+			u32 p_off, p_len, copied;
+			struct page *p;
+			u8 *vaddr;
+
+			copy = min(copy, len);
+			skb_frag_foreach_page(frag,
+					      skb_frag_off(frag) + offset - start,
+					      copy, p, p_off, p_len, copied) {
+				vaddr = kmap_atomic(p);
+				crc = crc32c(crc, vaddr + p_off, p_len);
+				kunmap_atomic(vaddr);
+			}
+			len -= copy;
+			if (len == 0)
+				return crc;
+			offset += copy;
+		}
+		start = end;
+	}
+
+	skb_walk_frags(skb, frag_iter) {
+		int end;
+
+		WARN_ON(start > offset + len);
+
+		end = start + frag_iter->len;
+		copy = end - offset;
+		if (copy > 0) {
+			copy = min(copy, len);
+			crc = skb_crc32c(frag_iter, offset - start, copy, crc);
+			len -= copy;
+			if (len == 0)
+				return crc;
+			offset += copy;
+		}
+		start = end;
+	}
+	BUG_ON(len);
+
+	return crc;
+}
+EXPORT_SYMBOL(skb_crc32c);
+#endif /* CONFIG_NET_CRC32C */
+
 __sum16 __skb_checksum_complete_head(struct sk_buff *skb, int len)
 {
 	__sum16 sum;
 
 	sum = csum_fold(skb_checksum(skb, 0, len, skb->csum));
--
2.49.0