From mboxrd@z Thu Jan 1 00:00:00 1970
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:29 +0100
Subject: [PATCH net-next 6/8] net: macb: add infrastructure for XSK buffer pool
Message-Id: <20260304-macb-xsk-v1-6-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Stanislav Fomichev, Richard Cochran
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Vladimir Kondratiev, Gregory CLEMENT,
 Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier,
 Théo Lebrun
X-Mailing-List: netdev@vger.kernel.org
Precedence: bulk
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Mailer: b4 0.14.3

Store an XSK buffer pool per queue, assigned through .ndo_bpf() with
command == XDP_SETUP_XSK_POOL.
We have no sequence upstream to disable a single queue, free its
buffers, refill it and re-enable the queue without affecting other
queues. Therefore, protect the operation with an interface-wide close
and open.

Also, lay the groundwork with a .ndo_xsk_wakeup() operation that
performs the pre-flight checks but is otherwise a no-op.

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb.h      |  1 +
 drivers/net/ethernet/cadence/macb_main.c | 66 +++++++++++++++++++++++++++++++-
 2 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 009a44e94726..a9e6f0289ecb 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1278,6 +1278,7 @@ struct macb_queue {
 	struct napi_struct napi_rx;
 	struct queue_stats stats;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
 	struct sk_buff *skb;
 	struct xdp_rxq_info xdp_rxq;
 };
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 65c2ec2a843c..a72d59ffd1cf 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -38,6 +38,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <net/xsk_buff_pool.h>
 #include "macb.h"

 /* This structure is only used for MACB on SiFive FU540 devices */
@@ -1564,6 +1565,24 @@ static int gem_xdp_xmit(struct net_device *dev, int num_frame,
 	return xmitted;
 }

+static int gem_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+{
+	struct macb *bp = netdev_priv(dev);
+	struct macb_queue *queue = &bp->queues[qid];
+
+	if (unlikely(!netif_carrier_ok(dev)))
+		return -ENETDOWN;
+
+	if (unlikely(qid >= bp->num_queues ||
+		     !rcu_access_pointer(bp->prog) ||
+		     !queue->xsk_pool))
+		return -ENXIO;
+
+	/* no-op, until rx/tx implement XSK support */
+
+	return 0;
+}
+
 static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 		       unsigned int *len, unsigned int *headroom,
 		       dma_addr_t addr)
@@ -3580,6 +3599,46 @@ static int gem_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	return err;
 }

+static int gem_xdp_setup_xsk_pool(struct net_device *netdev,
+				  struct xsk_buff_pool *pool, u16 qid)
+{
+	struct macb *bp = netdev_priv(netdev);
+	unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING;
+	struct macb_queue *queue = &bp->queues[qid];
+	bool running = netif_running(netdev);
+	struct device *dev = &bp->pdev->dev;
+	int err = 0;
+
+	if (qid >= bp->num_queues)
+		return -EINVAL;
+
+	if (pool && queue->xsk_pool)
+		return -EBUSY;
+
+	if (running)
+		macb_close(netdev);
+
+	if (pool) {
+		err = xsk_pool_dma_map(pool, dev, attrs);
+		if (err)
+			netdev_err(netdev, "xdp: failed to DMA map XSK pool\n");
+		else
+			queue->xsk_pool = pool;
+	} else {
+		if (queue->xsk_pool)
+			xsk_pool_dma_unmap(queue->xsk_pool, attrs);
+		queue->xsk_pool = NULL;
+	}
+
+	if (running) {
+		int err_open = macb_open(netdev);
+
+		err = err ?: err_open;
+	}
+
+	return err;
+}
+
 static int gem_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	struct macb *bp = netdev_priv(dev);
@@ -3590,6 +3649,9 @@ static int gem_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return gem_xdp_setup(dev, xdp->prog, xdp->extack);
+	case XDP_SETUP_XSK_POOL:
+		return gem_xdp_setup_xsk_pool(dev, xdp->xsk.pool,
+					      xdp->xsk.queue_id);
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -4852,6 +4914,7 @@ static const struct net_device_ops macb_netdev_ops = {
 	.ndo_setup_tc = macb_setup_tc,
 	.ndo_bpf = gem_xdp,
 	.ndo_xdp_xmit = gem_xdp_xmit,
+	.ndo_xsk_wakeup = gem_xsk_wakeup,
 };

 /* Configure peripheral capabilities according to device tree
@@ -6156,7 +6219,8 @@ static int macb_probe(struct platform_device *pdev)
 		dev->xdp_features = NETDEV_XDP_ACT_BASIC |
 				    NETDEV_XDP_ACT_REDIRECT |
-				    NETDEV_XDP_ACT_NDO_XMIT;
+				    NETDEV_XDP_ACT_NDO_XMIT |
+				    NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	}

 	netif_carrier_off(dev);

-- 
2.53.0