* [PATCH bpf-next] xsk: avoid double checking against rx queue being full
@ 2026-02-18 15:00 Maciej Fijalkowski
2026-02-19 19:15 ` Stanislav Fomichev
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Maciej Fijalkowski @ 2026-02-18 15:00 UTC (permalink / raw)
To: bpf, ast, daniel, andrii
Cc: netdev, magnus.karlsson, stfomichev, Maciej Fijalkowski
Currently the non-zc xsk rx path for the multi-buffer case checks twice
whether the xsk rx queue has enough space for producing descriptors:
1.
if (xskq_prod_nb_free(xs->rx, num_desc) < num_desc) {
xs->rx_queue_full++;
return -ENOBUFS;
}
2.
__xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
-> err = xskq_prod_reserve_desc(xs->rx, addr, len, flags);
-> if (xskq_prod_is_full(q))
The second check is redundant: in 1. we already peeked at the rx queue
and verified that there is enough space to produce the given number of
descriptors.
Provide helper functions that skip the recheck and therefore optimize
this path.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
net/xdp/xsk.c | 14 +++++++++++++-
net/xdp/xsk_queue.h | 16 +++++++++++-----
2 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f093c3453f64..aaadc13649e1 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -160,6 +160,17 @@ static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff_xsk *xskb, u32 len,
return 0;
}
+static void __xsk_rcv_zc_safe(struct xdp_sock *xs, struct xdp_buff_xsk *xskb,
+ u32 len, u32 flags)
+{
+ u64 addr;
+
+ addr = xp_get_handle(xskb, xskb->pool);
+ __xskq_prod_reserve_desc(xs->rx, addr, len, flags);
+
+ xp_release(xskb);
+}
+
static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
{
struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
@@ -292,7 +303,8 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
rem -= copied;
xskb = container_of(xsk_xdp, struct xdp_buff_xsk, xdp);
- __xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
+ __xsk_rcv_zc_safe(xs, xskb, copied - meta_len,
+ rem ? XDP_PKT_CONTD : 0);
meta_len = 0;
} while (rem);
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 1eb8d9f8b104..4f764b5748d2 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -440,20 +440,26 @@ static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_de
q->cached_prod = cached_prod;
}
-static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
- u64 addr, u32 len, u32 flags)
+static inline void __xskq_prod_reserve_desc(struct xsk_queue *q,
+ u64 addr, u32 len, u32 flags)
{
struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
u32 idx;
- if (xskq_prod_is_full(q))
- return -ENOBUFS;
-
/* A, matches D */
idx = q->cached_prod++ & q->ring_mask;
ring->desc[idx].addr = addr;
ring->desc[idx].len = len;
ring->desc[idx].options = flags;
+}
+
+static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
+ u64 addr, u32 len, u32 flags)
+{
+ if (xskq_prod_is_full(q))
+ return -ENOBUFS;
+
+ __xskq_prod_reserve_desc(q, addr, len, flags);
return 0;
}
--
2.43.0
* Re: [PATCH bpf-next] xsk: avoid double checking against rx queue being full
2026-02-18 15:00 [PATCH bpf-next] xsk: avoid double checking against rx queue being full Maciej Fijalkowski
@ 2026-02-19 19:15 ` Stanislav Fomichev
2026-02-20 1:06 ` Jason Xing
2026-02-25 1:20 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: Stanislav Fomichev @ 2026-02-19 19:15 UTC (permalink / raw)
To: Maciej Fijalkowski; +Cc: bpf, ast, daniel, andrii, netdev, magnus.karlsson
On 02/18, Maciej Fijalkowski wrote:
> Currently non-zc xsk rx path for multi-buffer case checks twice if xsk
> rx queue has enough space for producing descriptors:
> 1.
> if (xskq_prod_nb_free(xs->rx, num_desc) < num_desc) {
> xs->rx_queue_full++;
> return -ENOBUFS;
> }
> 2.
> __xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
> -> err = xskq_prod_reserve_desc(xs->rx, addr, len, flags);
> -> if (xskq_prod_is_full(q))
>
> Second part is redundant as in 1. we already peeked onto rx queue and
> checked that there is enough space to produce given amount of
> descriptors.
>
> Provide helper functions that will skip it and therefore optimize code.
>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
* Re: [PATCH bpf-next] xsk: avoid double checking against rx queue being full
2026-02-18 15:00 [PATCH bpf-next] xsk: avoid double checking against rx queue being full Maciej Fijalkowski
2026-02-19 19:15 ` Stanislav Fomichev
@ 2026-02-20 1:06 ` Jason Xing
2026-02-25 1:20 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: Jason Xing @ 2026-02-20 1:06 UTC (permalink / raw)
To: Maciej Fijalkowski
Cc: bpf, ast, daniel, andrii, netdev, magnus.karlsson, stfomichev
Hi Maciej,
On Wed, Feb 18, 2026 at 11:03 PM Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> Currently non-zc xsk rx path for multi-buffer case checks twice if xsk
> rx queue has enough space for producing descriptors:
> 1.
> if (xskq_prod_nb_free(xs->rx, num_desc) < num_desc) {
> xs->rx_queue_full++;
> return -ENOBUFS;
> }
> 2.
> __xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
> -> err = xskq_prod_reserve_desc(xs->rx, addr, len, flags);
> -> if (xskq_prod_is_full(q))
>
> Second part is redundant as in 1. we already peeked onto rx queue and
> checked that there is enough space to produce given amount of
> descriptors.
>
> Provide helper functions that will skip it and therefore optimize code.
>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
> net/xdp/xsk.c | 14 +++++++++++++-
> net/xdp/xsk_queue.h | 16 +++++++++++-----
> 2 files changed, 24 insertions(+), 6 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index f093c3453f64..aaadc13649e1 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -160,6 +160,17 @@ static int __xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff_xsk *xskb, u32 len,
> return 0;
> }
>
> +static void __xsk_rcv_zc_safe(struct xdp_sock *xs, struct xdp_buff_xsk *xskb,
> + u32 len, u32 flags)
This helper is used only here, so I wonder why it is needed?
Thanks,
Jason
> +{
> + u64 addr;
> +
> + addr = xp_get_handle(xskb, xskb->pool);
> + __xskq_prod_reserve_desc(xs->rx, addr, len, flags);
> +
> + xp_release(xskb);
> +}
> +
> static int xsk_rcv_zc(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
> {
> struct xdp_buff_xsk *xskb = container_of(xdp, struct xdp_buff_xsk, xdp);
> @@ -292,7 +303,8 @@ static int __xsk_rcv(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
> rem -= copied;
>
> xskb = container_of(xsk_xdp, struct xdp_buff_xsk, xdp);
> - __xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
> + __xsk_rcv_zc_safe(xs, xskb, copied - meta_len,
> + rem ? XDP_PKT_CONTD : 0);
> meta_len = 0;
> } while (rem);
>
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 1eb8d9f8b104..4f764b5748d2 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -440,20 +440,26 @@ static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_de
> q->cached_prod = cached_prod;
> }
>
> -static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
> - u64 addr, u32 len, u32 flags)
> +static inline void __xskq_prod_reserve_desc(struct xsk_queue *q,
> + u64 addr, u32 len, u32 flags)
> {
> struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
> u32 idx;
>
> - if (xskq_prod_is_full(q))
> - return -ENOBUFS;
> -
> /* A, matches D */
> idx = q->cached_prod++ & q->ring_mask;
> ring->desc[idx].addr = addr;
> ring->desc[idx].len = len;
> ring->desc[idx].options = flags;
> +}
> +
> +static inline int xskq_prod_reserve_desc(struct xsk_queue *q,
> + u64 addr, u32 len, u32 flags)
> +{
> + if (xskq_prod_is_full(q))
> + return -ENOBUFS;
> +
> + __xskq_prod_reserve_desc(q, addr, len, flags);
>
> return 0;
> }
> --
> 2.43.0
>
>
* Re: [PATCH bpf-next] xsk: avoid double checking against rx queue being full
2026-02-18 15:00 [PATCH bpf-next] xsk: avoid double checking against rx queue being full Maciej Fijalkowski
2026-02-19 19:15 ` Stanislav Fomichev
2026-02-20 1:06 ` Jason Xing
@ 2026-02-25 1:20 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 4+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-02-25 1:20 UTC (permalink / raw)
To: Maciej Fijalkowski
Cc: bpf, ast, daniel, andrii, netdev, magnus.karlsson, stfomichev
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Wed, 18 Feb 2026 16:00:00 +0100 you wrote:
> Currently non-zc xsk rx path for multi-buffer case checks twice if xsk
> rx queue has enough space for producing descriptors:
> 1.
> if (xskq_prod_nb_free(xs->rx, num_desc) < num_desc) {
> xs->rx_queue_full++;
> return -ENOBUFS;
> }
> 2.
> __xsk_rcv_zc(xs, xskb, copied - meta_len, rem ? XDP_PKT_CONTD : 0);
> -> err = xskq_prod_reserve_desc(xs->rx, addr, len, flags);
> -> if (xskq_prod_is_full(q))
>
> [...]
Here is the summary with links:
- [bpf-next] xsk: avoid double checking against rx queue being full
https://git.kernel.org/bpf/bpf-next/c/f620af11c27b
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html