From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: "Alexander Lobakin" <aleksander.lobakin@intel.com>,
"Lorenzo Bianconi" <lorenzo@kernel.org>,
"Daniel Xu" <dxu@dxuuu.xyz>,
"Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Andrii Nakryiko" <andrii@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
"Toke Høiland-Jørgensen" <toke@kernel.org>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
"Martin KaFai Lau" <martin.lau@linux.dev>,
netdev@vger.kernel.org, bpf@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 6/8] bpf: cpumap: switch to napi_skb_cache_get_bulk()
Date: Tue, 7 Jan 2025 16:29:38 +0100
Message-ID: <20250107152940.26530-7-aleksander.lobakin@intel.com>
In-Reply-To: <20250107152940.26530-1-aleksander.lobakin@intel.com>
Now that cpumap uses GRO, which drops unused skb heads to the NAPI
cache, use napi_skb_cache_get_bulk() to try to reuse cached entries
and lower MM layer pressure. Always disable the BH before checking and
running the cpumap-pinned XDP prog and don't re-enable it in between
that and allocating an skb bulk, as we can access the NAPI caches only
from the BH context.
The better GRO aggregates packets, the fewer new skbs need to be
allocated. If an aggregated skb contains 16 frags, 15 skb heads were
returned to the cache, so the next 15 skbs will be built without
allocating anything.
The same trafficgen UDP GRO test now shows:

                GRO off   GRO on
threaded GRO        2.3        4   Mpps
thr bulk GRO        2.4      4.7   Mpps
diff                 +4      +17   %

Compared with the baseline cpumap:

baseline            2.7      N/A   Mpps
thr bulk GRO        2.4      4.7   Mpps
diff                -11      +74   %
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Daniel Xu <dxu@dxuuu.xyz>
---
kernel/bpf/cpumap.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 4c3eeb931bff..c6b7f0e1ba61 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -253,7 +253,7 @@ static bool cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
if (!rcpu->prog)
goto out;
- rcu_read_lock_bh();
+ rcu_read_lock();
bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
@@ -265,7 +265,7 @@ static bool cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
xdp_do_flush();
bpf_net_ctx_clear(bpf_net_ctx);
- rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
+ rcu_read_unlock();
out:
if (unlikely(ret->skb_n) && ret->xdp_n)
@@ -305,7 +305,6 @@ static int cpu_map_kthread_run(void *data)
while (!kthread_should_stop() || !__ptr_ring_empty(rcpu->queue)) {
struct xdp_cpumap_stats stats = {}; /* zero stats */
unsigned int kmem_alloc_drops = 0, sched = 0;
- gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
struct cpu_map_ret ret = { };
void *frames[CPUMAP_BATCH];
void *skbs[CPUMAP_BATCH];
@@ -357,21 +356,19 @@ static int cpu_map_kthread_run(void *data)
prefetchw(page);
}
+ local_bh_disable();
+
/* Support running another XDP prog on this CPU */
- if (!cpu_map_bpf_prog_run(rcpu, frames, skbs, &ret, &stats)) {
- local_bh_disable();
+ if (!cpu_map_bpf_prog_run(rcpu, frames, skbs, &ret, &stats))
goto stats;
- }
- m = kmem_cache_alloc_bulk(net_hotdata.skbuff_cache, gfp,
- ret.xdp_n, skbs);
+ m = napi_skb_cache_get_bulk(skbs, ret.xdp_n);
if (unlikely(!m)) {
for (i = 0; i < ret.xdp_n; i++)
skbs[i] = NULL; /* effect: xdp_return_frame */
kmem_alloc_drops += ret.xdp_n;
}
- local_bh_disable();
for (i = 0; i < ret.xdp_n; i++) {
struct xdp_frame *xdpf = frames[i];
struct sk_buff *skb = skbs[i];
--
2.47.1
Thread overview: 20+ messages
2025-01-07 15:29 [PATCH net-next v2 0/8] bpf: cpumap: enable GRO for XDP_PASS frames Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 1/8] net: gro: decouple GRO from the NAPI layer Alexander Lobakin
2025-01-09 14:24 ` Paolo Abeni
2025-01-13 13:50 ` Alexander Lobakin
2025-01-13 21:01 ` Jakub Kicinski
2025-01-14 17:19 ` Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 2/8] net: gro: expose GRO init/cleanup to use outside of NAPI Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 3/8] bpf: cpumap: switch to GRO from netif_receive_skb_list() Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 4/8] bpf: cpumap: reuse skb array instead of a linked list to chain skbs Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 5/8] net: skbuff: introduce napi_skb_cache_get_bulk() Alexander Lobakin
2025-01-09 13:16 ` Paolo Abeni
2025-01-13 13:47 ` Alexander Lobakin
2025-01-07 15:29 ` Alexander Lobakin [this message]
2025-01-07 15:29 ` [PATCH net-next v2 7/8] veth: use napi_skb_cache_get_bulk() instead of xdp_alloc_skb_bulk() Alexander Lobakin
2025-01-07 15:29 ` [PATCH net-next v2 8/8] xdp: remove xdp_alloc_skb_bulk() Alexander Lobakin
2025-01-07 17:17 ` [PATCH net-next v2 0/8] bpf: cpumap: enable GRO for XDP_PASS frames Jesper Dangaard Brouer
2025-01-08 13:39 ` Alexander Lobakin
2025-01-09 17:02 ` Toke Høiland-Jørgensen
2025-01-13 13:50 ` Alexander Lobakin
2025-01-09 1:26 ` Daniel Xu