Date: Tue, 28 Apr 2026 17:47:15 +0300
From: Leon Romanovsky
To: Tariq Toukan
Cc: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	"David S. Miller", Saeed Mahameed, Mark Bloch,
	netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-kernel@vger.kernel.org, Gal Pressman, Dragos Tatulea,
	Moshe Shemesh, Nimrod Oren
Subject: Re: [PATCH net-next 3/3] net/mlx5: use internal dma pools for frag buf alloc
Message-ID: <20260428144715.GR440345@unreal>
References: <20260428052920.219201-1-tariqt@nvidia.com>
	<20260428052920.219201-4-tariqt@nvidia.com>
In-Reply-To: <20260428052920.219201-4-tariqt@nvidia.com>

On Tue, Apr 28, 2026 at 08:29:20AM +0300, Tariq Toukan wrote:
> From: Nimrod Oren
>
> Add mlx5_dma_pool alloc/free paths, and wire the mlx5_frag_buf
> allocation and free paths to use them.
>
> mlx5_frag_buf_alloc_node() now selects an mlx5_dma_pool to allocate
> fragments from, instead of directly allocating full coherent pages.
>
> mlx5_frag_buf_free() frees from the respective pool.
>
> mlx5_dma_pool_alloc() keeps allocation fast by maintaining pages with
> available indexes at the head of the list, so the common allocation
> path can take a free index immediately. New backing pages are
> allocated only when no free index is available.
>
> mlx5_dma_pool_free() returns released indexes to the pool and frees a
> backing page once all of its indexes become free. This avoids keeping
> fully free pages alive for the lifetime of the pool and reduces the
> coherent DMA memory footprint.
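For readers following along, the pool scheme described above can be sketched roughly as follows. This is an illustrative userspace mock, not the actual mlx5 code: malloc() stands in for dma_alloc_coherent(), a byte array stands in for the index bitmap, and every name is invented. The real code keeps pages with free indexes at the list head; the sketch uses a linear scan to stay short.

```c
#include <stdlib.h>
#include <assert.h>

#define BLOCKS_PER_PAGE 4

struct pool_page {
	struct pool_page *next;
	unsigned char used[BLOCKS_PER_PAGE];	/* stands in for the index bitmap */
	int nused;
	char *mem;				/* backing "DMA" page */
};

struct dma_pool_sketch {
	struct pool_page *pages;
	size_t block_size;
	int npages;
};

static char *pool_alloc(struct dma_pool_sketch *p)
{
	struct pool_page *pg;
	int i;

	/* Fast path: reuse a free index from an existing page. */
	for (pg = p->pages; pg; pg = pg->next)
		if (pg->nused < BLOCKS_PER_PAGE)
			break;

	/* Slow path: no free index anywhere, grow by one backing page. */
	if (!pg) {
		pg = calloc(1, sizeof(*pg));
		pg->mem = malloc(p->block_size * BLOCKS_PER_PAGE);
		pg->next = p->pages;
		p->pages = pg;
		p->npages++;
	}

	for (i = 0; i < BLOCKS_PER_PAGE; i++) {
		if (!pg->used[i]) {
			pg->used[i] = 1;
			pg->nused++;
			return pg->mem + i * p->block_size;
		}
	}
	return NULL;	/* unreachable: pg was picked with a free index */
}

static void pool_free(struct dma_pool_sketch *p, char *ptr)
{
	struct pool_page **link, *pg;

	for (link = &p->pages; (pg = *link); link = &pg->next) {
		if (ptr < pg->mem || ptr >= pg->mem + p->block_size * BLOCKS_PER_PAGE)
			continue;

		pg->used[(ptr - pg->mem) / p->block_size] = 0;
		/* Release the backing page once all of its indexes are
		 * free, rather than caching it for the pool's lifetime. */
		if (--pg->nused == 0) {
			*link = pg->next;
			free(pg->mem);
			free(pg);
			p->npages--;
		}
		return;
	}
}
```

The key property matching the commit message: a new backing page is allocated only when no existing page has a free index, and an empty page is returned to the system immediately rather than cached.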
>
> Signed-off-by: Nimrod Oren
> Signed-off-by: Tariq Toukan
> ---
>  .../net/ethernet/mellanox/mlx5/core/alloc.c | 185 ++++++++++++++----
>  include/linux/mlx5/driver.h                 |   2 +
>  2 files changed, 154 insertions(+), 33 deletions(-)

<...>

> +	if (WARN_ONCE(idx >= blocks_per_page,
> +		      "mlx5 dma pool invalid idx: %lu (max %d)\n",
> +		      idx, blocks_per_page - 1))
> +		return;

<...>

> +	if (WARN_ONCE(test_bit(idx, page->bitmap),
> +		      "mlx5 dma pool double free: idx=%lu block_shift=%u\n",
> +		      idx, pool->block_shift))
> +		goto unlock;

<...>

> +	if (WARN_ONCE(size <= 0, "mlx5_frag_buf non-positive size: %d\n", size))
> +		return -EINVAL;

<...>

> +	if (WARN_ONCE(node < 0 || node >= nr_node_ids || !node_possible(node),
> +		      "mlx5_frag_buf invalid node ID: %d\n", node))
> +		return -EINVAL;

None of the WARN_ONCE() instances in this patch (or in the previous one)
are reachable. WARN_ONCE() should be used to detect states that are truly
impossible, not to flag misuse of an internal API. There is no need for
defensive programming around in-kernel or in-driver APIs.

Thanks
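As a rough illustration of the distinction drawn above (a userspace sketch with a trivial stand-in for WARN_ONCE(); all names are invented and nothing here is mlx5 code): range checks on arguments that in-driver callers already guarantee are dead code and can simply be dropped, while a once-only warning earns its place guarding a state the code itself is supposed to make impossible.

```c
#include <stdio.h>
#include <stdbool.h>
#include <assert.h>

/* Userspace stand-in for the kernel's WARN_ONCE(): warns once and
 * returns the condition so it can gate a recovery path. */
static bool warned;

static bool warn_once(bool cond, const char *msg)
{
	if (cond && !warned) {
		warned = true;
		fprintf(stderr, "WARNING: %s\n", msg);
	}
	return cond;
}

#define NBLOCKS 8
static unsigned char used[NBLOCKS];
static int nused;	/* invariant: equals the number of set entries in used[] */

/* In-driver API: the only callers are in this driver and already
 * guarantee 0 <= idx < NBLOCKS, so no defensive range check on idx;
 * such a check would be unreachable dead code. */
static void block_get(int idx)
{
	used[idx] = 1;
	nused++;
}

/* By contrast, a warning is justified for a state this code itself
 * must make impossible, e.g. corrupted internal accounting. */
static void check_accounting(void)
{
	int i, n = 0;

	for (i = 0; i < NBLOCKS; i++)
		n += used[i];
	warn_once(n != nused, "block accounting corrupted");
}
```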