From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: William Tu, Tariq Toukan, Michal Swiatkowski, Jakub Kicinski, Sasha Levin,
    saeedm@nvidia.com, andrew+netdev@lunn.ch, davem@davemloft.net,
    edumazet@google.com, pabeni@redhat.com, dtatulea@nvidia.com,
    alazar@nvidia.com, yorayz@nvidia.com, lkayal@nvidia.com,
    netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH AUTOSEL 6.12 383/486] net/mlx5e: reduce the max log mpwrq sz for ECPF and reps
Date: Mon, 5 May 2025 18:37:39 -0400
Message-Id: <20250505223922.2682012-383-sashal@kernel.org>
In-Reply-To: <20250505223922.2682012-1-sashal@kernel.org>
References: <20250505223922.2682012-1-sashal@kernel.org>
X-stable-base: Linux 6.12.26

From: William Tu

[ Upstream commit e1d68ea58c7e9ebacd9ad7a99b25a3578fa62182 ]

For the ECPF and representors, reduce the max MPWRQ size from 256KB (18)
to 128KB (17). This prepares for a later patch that reduces representor
memory usage.

With Striding RQ, there is a minimum of 4 MPWQEs, so with a max MPWRQ
size of 128KB the minimal memory is 4 * 128KB = 512KB. When creating the
page pool, assuming a 1500 MTU, the minimal page pool size is therefore
512KB / 4KB = 128 pages = 256 RX ring entries (2 entries per page).
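To make the arithmetic above concrete, here is a small standalone sketch
(illustrative only, not part of the patch). It assumes 4KB pages, the
4-MPWQE Striding RQ minimum, and 2 RX ring entries per page, as stated
above; all names in it are made up for the example.

#include <stdio.h>

int main(void)
{
	/* max log MPWRQ size: 18 = 256KB per MPWQE, 17 = 128KB per MPWQE */
	unsigned int log_wqe_sz[] = { 18, 17 };

	for (int i = 0; i < 2; i++) {
		unsigned int wqe_kb = 1u << (log_wqe_sz[i] - 10); /* KB per MPWQE */
		unsigned int min_mem_kb = 4 * wqe_kb;      /* at least 4 MPWQEs */
		unsigned int pages = min_mem_kb / 4;       /* 4KB pages */
		unsigned int ring_entries = pages * 2;     /* 2 entries per page */

		printf("log size %u: %u KB minimum, %u pages, %u rx ring entries\n",
		       log_wqe_sz[i], min_mem_kb, pages, ring_entries);
	}
	return 0;
}

With the cap at 17, this works out to 512KB, 128 pages and 256 RX ring
entries; with the old cap of 18 it doubles to 1MB and 256 pages.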
Before this patch, setting the RX ring size (ethtool -G rx) to 256 caused
the driver to allocate a larger page pool than it needs, because the max
MPWRQ size was 256KB (18). For example: 4 * 256KB = 1MB, and
1MB / 4KB = 256 pages, while 128 pages is actually enough. Reducing the
max MPWRQ size to 128KB fixes this limitation.

Signed-off-by: William Tu
Signed-off-by: Tariq Toukan
Reviewed-by: Michal Swiatkowski
Link: https://patch.msgid.link/20250209101716.112774-7-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      |  2 --
 .../net/ethernet/mellanox/mlx5/core/en/params.c   | 15 +++++++++++----
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index d6266f6a96d6e..e048a667e0758 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -94,8 +94,6 @@ struct page_pool;
 #define MLX5_MPWRQ_DEF_LOG_STRIDE_SZ(mdev) \
 	MLX5_MPWRQ_LOG_STRIDE_SZ(mdev, order_base_2(MLX5E_RX_MAX_HEAD))
 
-#define MLX5_MPWRQ_MAX_LOG_WQE_SZ 18
-
 /* Keep in sync with mlx5e_mpwrq_log_wqe_sz.
  * These are theoretical maximums, which can be further restricted by
  * capabilities. These values are used for static resource allocations and
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index 8c4d710e85675..58ec5e44aa7ad 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -10,6 +10,9 @@
 #include
 #include
 
+#define MLX5_MPWRQ_MAX_LOG_WQE_SZ 18
+#define MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ 17
+
 static u8 mlx5e_mpwrq_min_page_shift(struct mlx5_core_dev *mdev)
 {
 	u8 min_page_shift = MLX5_CAP_GEN_2(mdev, log_min_mkey_entity_size);
@@ -103,18 +106,22 @@ u8 mlx5e_mpwrq_log_wqe_sz(struct mlx5_core_dev *mdev, u8 page_shift,
 			  enum mlx5e_mpwrq_umr_mode umr_mode)
 {
 	u8 umr_entry_size = mlx5e_mpwrq_umr_entry_size(umr_mode);
-	u8 max_pages_per_wqe, max_log_mpwqe_size;
+	u8 max_pages_per_wqe, max_log_wqe_size_calc;
+	u8 max_log_wqe_size_cap;
 	u16 max_wqe_size;
 
 	/* Keep in sync with MLX5_MPWRQ_MAX_PAGES_PER_WQE. */
 	max_wqe_size = mlx5e_get_max_sq_aligned_wqebbs(mdev) * MLX5_SEND_WQE_BB;
 	max_pages_per_wqe = ALIGN_DOWN(max_wqe_size - sizeof(struct mlx5e_umr_wqe),
 				       MLX5_UMR_FLEX_ALIGNMENT) / umr_entry_size;
-	max_log_mpwqe_size = ilog2(max_pages_per_wqe) + page_shift;
+	max_log_wqe_size_calc = ilog2(max_pages_per_wqe) + page_shift;
+
+	WARN_ON_ONCE(max_log_wqe_size_calc < MLX5E_ORDER2_MAX_PACKET_MTU);
 
-	WARN_ON_ONCE(max_log_mpwqe_size < MLX5E_ORDER2_MAX_PACKET_MTU);
+	max_log_wqe_size_cap = mlx5_core_is_ecpf(mdev) ?
+			MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ : MLX5_MPWRQ_MAX_LOG_WQE_SZ;
 
-	return min_t(u8, max_log_mpwqe_size, MLX5_MPWRQ_MAX_LOG_WQE_SZ);
+	return min_t(u8, max_log_wqe_size_calc, max_log_wqe_size_cap);
 }
 
 u8 mlx5e_mpwrq_pages_per_wqe(struct mlx5_core_dev *mdev, u8 page_shift,
-- 
2.39.5
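As a companion note to the diff above: the behavioral change boils down to
picking a per-device cap on the log WQE size. The tiny standalone model
below is illustrative only and not part of the patch; is_ecpf stands in
for mlx5_core_is_ecpf(mdev), and the rest of the calculation done by
mlx5e_mpwrq_log_wqe_sz() is omitted.

#include <stdbool.h>
#include <stdio.h>

#define MLX5_MPWRQ_MAX_LOG_WQE_SZ	18	/* 256KB, regular functions */
#define MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ	17	/* 128KB, ECPF and reps */

/* Select the log WQE size cap the way the patched code does, with
 * is_ecpf standing in for mlx5_core_is_ecpf(mdev).
 */
static unsigned int log_wqe_sz_cap(bool is_ecpf)
{
	return is_ecpf ? MLX5_REP_MPWRQ_MAX_LOG_WQE_SZ :
			 MLX5_MPWRQ_MAX_LOG_WQE_SZ;
}

int main(void)
{
	printf("regular function cap: %u (256KB MPWRQ)\n", log_wqe_sz_cap(false));
	printf("ECPF/representor cap: %u (128KB MPWRQ)\n", log_wqe_sz_cap(true));
	return 0;
}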