From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, Przemek Kitszel, Andrew Lunn,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Jacob Keller, Aleksandr Loktionov,
	nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Date: Thu, 4 Dec 2025 16:51:30 +0100
Message-ID: <20251204155133.2437621-3-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20251204155133.2437621-1-aleksander.lobakin@intel.com>
References: <20251204155133.2437621-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [Intel-wired-lan] [PATCH iwl-next v2 2/5] libeth: handle creating
 pools with unreadable buffers

libeth has been using netmems for quite some time already, so in order
to support unreadable frags / memory providers, it only needs to set
PP_FLAG_ALLOW_UNREADABLE_NETMEM when needed. Also add a couple of
sanity checks to make sure the driver didn't mess up the configuration
options and, when an MP is installed, always return a truesize equal
to PAGE_SIZE, so that libeth_rx_alloc() will never try to allocate
frags. Memory providers manage buffers on their own and expect a 1:1
buffer / HW Rx descriptor association.

Bonus: mention in the libeth_sqe_type description that
LIBETH_SQE_EMPTY should also be used for netmem Tx SQEs -- they don't
need DMA unmapping.

Reviewed-by: Jacob Keller
Reviewed-by: Aleksandr Loktionov
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/libeth/tx.h                |  2 +-
 drivers/net/ethernet/intel/libeth/rx.c | 45 ++++++++++++++++++++++++++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h
index c3db5c6f1641..a66fc2b3a114 100644
--- a/include/net/libeth/tx.h
+++ b/include/net/libeth/tx.h
@@ -12,7 +12,7 @@
 
 /**
  * enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion
- * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX/XSk frame, no action required
+ * @LIBETH_SQE_EMPTY: empty OR netmem/XDP_TX/XSk frame, no action required
  * @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required
  * @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree()
  * @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA

diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 9ac3a1448b2f..9b45c9cdd599 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -6,6 +6,7 @@
 
 #include
 #include
+#include
 
 /* Rx buffer management */
 
@@ -139,9 +140,50 @@ static bool libeth_rx_page_pool_params_zc(struct libeth_fq *fq,
 	fq->buf_len = clamp(mtu, LIBETH_RX_BUF_STRIDE, max);
 	fq->truesize = fq->buf_len;
 
+	/*
+	 * Allow frags only for kernel pages. `fq->truesize == pp->max_len`
+	 * will always fall back to regular page_pool_alloc_netmems()
+	 * regardless of the MTU / FQ buffer size.
+	 */
+	if (pp->flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM)
+		fq->truesize = pp->max_len;
+
 	return true;
 }
 
+/**
+ * libeth_rx_page_pool_check_unread - check input params for unreadable MPs
+ * @fq: buffer queue to check
+ * @pp: &page_pool_params for the queue
+ *
+ * Make sure we don't create an invalid pool with full-frame unreadable
+ * buffers, bidirectional unreadable buffers or so, and configure the
+ * ZC payload pool accordingly.
+ *
+ * Return: true on success, false on invalid input params.
+ */
+static bool libeth_rx_page_pool_check_unread(const struct libeth_fq *fq,
+					     struct page_pool_params *pp)
+{
+	if (!pp->netdev)
+		return true;
+
+	if (!netif_rxq_has_unreadable_mp(pp->netdev, pp->queue_idx))
+		return true;
+
+	/* For now, the core stack doesn't allow XDP with unreadable frags */
+	if (fq->xdp)
+		return false;
+
+	/* It should be either a header pool or a ZC payload pool */
+	if (fq->type == LIBETH_FQE_HDR)
+		return !fq->hsplit;
+
+	pp->flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+
+	return fq->hsplit;
+}
+
 /**
  * libeth_rx_fq_create - create a PP with the default libeth settings
  * @fq: buffer queue struct to fill
@@ -166,6 +208,9 @@ int libeth_rx_fq_create(struct libeth_fq *fq, void *napi_dev)
 	struct page_pool *pool;
 	int ret;
 
+	if (!libeth_rx_page_pool_check_unread(fq, &pp))
+		return -EINVAL;
+
 	pp.dma_dir = fq->xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
 
 	if (!fq->hsplit)
-- 
2.52.0