From: Jason Xing <kerneljasonxing@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, bjorn@kernel.org, magnus.karlsson@intel.com,
maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
hawk@kernel.org, john.fastabend@gmail.com
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
Jason Xing <kernelxing@tencent.com>
Subject: [PATCH net-next v6 1/2] xsk: advance cq/fq check when shared umem is used
Date: Sun, 4 Jan 2026 09:21:24 +0800
Message-ID: <20260104012125.44003-2-kerneljasonxing@gmail.com>
In-Reply-To: <20260104012125.44003-1-kerneljasonxing@gmail.com>
From: Jason Xing <kernelxing@tencent.com>
When the umem is shared across different queue ids or devices, binding
must fail if either the completion ring (cq) or the fill ring (fq) has
not been initialized. This check was previously performed in
xp_assign_dev_shared(). Move it to the start of the shared-umem branch
in xsk_bind() so that 1) the pool allocation and related setup are
skipped entirely when cq or fq is NULL, and 2) it serves as preparation
for the next patch in the series.
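For context, xsk_validate_queues() is the helper already used on the
bind path; it is expected to amount to a check that both per-socket
temporary rings were created via the XDP_UMEM_FILL_RING and
XDP_UMEM_COMPLETION_RING setsockopts, roughly (a sketch for
illustration, not part of this patch):

    static bool xsk_validate_queues(struct xdp_sock *xs)
    {
            /* fq_tmp/cq_tmp hold the rings created by the
             * XDP_UMEM_FILL_RING and XDP_UMEM_COMPLETION_RING
             * setsockopts until bind hands them over to the pool.
             */
            return xs->fq_tmp && xs->cq_tmp;
    }

With the check this early, a failed bind bails out before any buffer
pool is created.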
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
net/xdp/xsk.c | 7 +++++++
net/xdp/xsk_buff_pool.c | 4 ----
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f093c3453f64..3c52fafae47c 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -1349,6 +1349,13 @@ static int xsk_bind(struct socket *sock, struct sockaddr_unsized *addr, int addr
}
if (umem_xs->queue_id != qid || umem_xs->dev != dev) {
+ /* One fill and completion ring required for each queue id. */
+ if (!xsk_validate_queues(xs)) {
+ err = -EINVAL;
+ sockfd_put(sock);
+ goto out_unlock;
+ }
+
/* Share the umem with another socket on another qid
* and/or device.
*/
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 51526034c42a..6bf84316e2ad 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -247,10 +247,6 @@ int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_sock *umem_xs,
u16 flags;
struct xdp_umem *umem = umem_xs->umem;
- /* One fill and completion ring required for each queue id. */
- if (!pool->fq || !pool->cq)
- return -EINVAL;
-
flags = umem->zc ? XDP_ZEROCOPY : XDP_COPY;
if (umem_xs->pool->uses_need_wakeup)
flags |= XDP_USE_NEED_WAKEUP;
--
2.41.3