From: Mark Bloch <mbloch@nvidia.com>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Eric Dumazet <edumazet@google.com>,
	"Andrew Lunn" <andrew+netdev@lunn.ch>
Cc: <saeedm@nvidia.com>, <gal@nvidia.com>, <leonro@nvidia.com>,
	<tariqt@nvidia.com>, Leon Romanovsky <leon@kernel.org>,
	<netdev@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>,
	Dragos Tatulea <dtatulea@nvidia.com>,
	"Cosmin Ratiu" <cratiu@nvidia.com>,
	Mark Bloch <mbloch@nvidia.com>
Subject: [PATCH net-next v4 08/11] net/mlx5e: Add support for UNREADABLE netmem page pools
Date: Tue, 10 Jun 2025 18:09:47 +0300
Message-ID: <20250610150950.1094376-9-mbloch@nvidia.com>
In-Reply-To: <20250610150950.1094376-1-mbloch@nvidia.com>

From: Saeed Mahameed <saeedm@nvidia.com>

On netdev_rx_queue_restart, a special type of page pool may be expected.

Declare support for UNREADABLE netmem iov pages in the pool params, but
only when header/data split (SHAMPO) RQ mode is enabled. Also set the
queue index in the page pool params struct.

SHAMPO mode requirement: without header split, RX needs to peek at the
data, so UNREADABLE_NETMEM cannot be used.

The patch also enables the use of a separate page pool for headers when
a memory provider is installed for the queue; otherwise the same common
page pool continues to be used.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
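
Note (not part of the commit): a minimal sketch of the intent behind the
mlx5_rq_needs_separate_hd_pool() hunk below. When a memory provider
(devmem or io_uring zero-copy) is bound to the queue,
rxq->mp_params.mp_ops is non-NULL, so payload pages may be unreadable
and headers need their own kernel-memory pool. The
mlx5e_alloc_hd_page_pool() helper name and error label are hypothetical,
for illustration only:

	if (mlx5_rq_needs_separate_hd_pool(rq)) {
		/* The payload pool is provider-backed and may be
		 * unreadable; headers must stay in readable kernel
		 * memory, so give them a dedicated page pool
		 * (hypothetical helper name).
		 */
		err = mlx5e_alloc_hd_page_pool(rq);
		if (err)
			goto err_free; /* hypothetical label */
	}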

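For reference, with SHAMPO active the page pool params assembled in
mlx5e_alloc_rq() end up roughly as below (a sketch based on the second
hunk; unrelated fields omitted). page_pool_create() is the standard
page pool core API:

	pp_params.netdev    = rq->netdev;
	pp_params.dma_dir   = rq->buff.map_dir;
	pp_params.max_len   = PAGE_SIZE;
	/* Bind the pool to this RX queue so the core can match an
	 * installed memory provider to it on queue restart.
	 */
	pp_params.queue_idx = rq->ix;

	/* Header/data split means the driver never has to read payload
	 * pages, so unreadable (device/user) memory is acceptable.
	 */
	if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
		pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;

	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool)) {
		err = PTR_ERR(rq->page_pool);
		rq->page_pool = NULL;
	}

Without SHAMPO the flag stays clear, and the page pool core rejects a
provider-backed pool that does not set PP_FLAG_ALLOW_UNREADABLE_NETMEM.
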
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5e649705e35f..a51e204bd364 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -749,7 +749,9 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
 
 static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
 {
-	return false;
+	struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
+
+	return !!rxq->mp_params.mp_ops;
 }
 
 static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
@@ -964,6 +966,11 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		pp_params.netdev    = rq->netdev;
 		pp_params.dma_dir   = rq->buff.map_dir;
 		pp_params.max_len   = PAGE_SIZE;
+		pp_params.queue_idx = rq->ix;
+
+		/* SHAMPO header/data split allows for unreadable netmem */
+		if (test_bit(MLX5E_RQ_STATE_SHAMPO, &rq->state))
+			pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 
 		/* page_pool can be used even when there is no rq->xdp_prog,
 		 * given page_pool does not handle DMA mapping there is no
-- 
2.34.1


Thread overview: 23+ messages
2025-06-10 15:09 [PATCH net-next v4 00/11] net/mlx5e: Add support for devmem and io_uring TCP zero-copy Mark Bloch
2025-06-10 15:09 ` [PATCH net-next v4 01/11] net: Allow const args for of page_to_netmem() Mark Bloch
2025-06-12  4:52   ` Mina Almasry
2025-06-12  8:06     ` [PATCH net-next v4 01/11] net: Allow const args for of page_to_netmem() Dragos Tatulea
2025-06-10 15:09 ` [PATCH net-next v4 02/11] net: Add skb_can_coalesce for netmem Mark Bloch
2025-06-12  4:53   ` Mina Almasry
2025-06-10 15:09 ` [PATCH net-next v4 03/11] net/mlx5e: SHAMPO: Reorganize mlx5_rq_shampo_alloc Mark Bloch
2025-06-10 15:09 ` [PATCH net-next v4 04/11] net/mlx5e: SHAMPO: Remove redundant params Mark Bloch
2025-06-10 15:09 ` [PATCH net-next v4 05/11] net/mlx5e: SHAMPO: Improve hw gro capability checking Mark Bloch
2025-06-10 15:09 ` [PATCH net-next v4 06/11] net/mlx5e: SHAMPO: Separate pool for headers Mark Bloch
2025-06-10 15:09 ` [PATCH net-next v4 07/11] net/mlx5e: Convert over to netmem Mark Bloch
2025-06-12  5:11   ` Mina Almasry
2025-06-12  8:19     ` Dragos Tatulea
2025-06-10 15:09 ` Mark Bloch [this message]
2025-06-12  5:16   ` [PATCH net-next v4 08/11] net/mlx5e: Add support for UNREADABLE netmem page pools Mina Almasry
2025-06-12  8:46     ` Dragos Tatulea
2025-06-12 20:47       ` Mina Almasry
2025-06-10 15:09 ` [PATCH net-next v4 09/11] net/mlx5e: Implement queue mgmt ops and single channel swap Mark Bloch
2025-06-11 13:26   ` Jakub Kicinski
2025-06-10 15:09 ` [PATCH net-next v4 10/11] net/mlx5e: Support ethtool tcp-data-split settings Mark Bloch
2025-06-11 13:26   ` Jakub Kicinski
2025-06-10 15:09 ` [PATCH net-next v4 11/11] net/mlx5e: Add TX support for netmems Mark Bloch
2025-06-12  5:17   ` Mina Almasry
