From: Tariq Toukan <tariqt@nvidia.com>
To: Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
"Alexei Starovoitov" <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
Richard Cochran <richardcochran@gmail.com>,
<netdev@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <bpf@vger.kernel.org>,
Gal Pressman <gal@nvidia.com>,
Dragos Tatulea <dtatulea@nvidia.com>,
Cosmin Ratiu <cratiu@nvidia.com>,
Pavel Begunkov <asml.silence@gmail.com>,
David Wei <dw@davidwei.uk>
Subject: [PATCH net-next 10/15] net/mlx5e: Alloc rq drop page based on calculated page_shift
Date: Mon, 23 Feb 2026 22:41:50 +0200 [thread overview]
Message-ID: <20260223204155.1783580-11-tariqt@nvidia.com> (raw)
In-Reply-To: <20260223204155.1783580-1-tariqt@nvidia.com>
From: Dragos Tatulea <dtatulea@nvidia.com>
An upcoming patch will allow the order of RX pages to be greater
than 0. Make sure that the drop page is also allocated with the
matching size when that happens.
Take extra care when calculating the drop page size to account for
page_shift < PAGE_SHIFT, which can happen for xsk.
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
.../net/ethernet/mellanox/mlx5/core/en_main.c | 27 ++++++++++++-------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 6344dbb6335e..2d3d89707246 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -636,14 +636,18 @@ static void mlx5e_rq_timeout_work(struct work_struct *timeout_work)
static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
{
- rq->wqe_overflow.page = alloc_page(GFP_KERNEL);
+ /* xsk can have page_shift < PAGE_SHIFT */
+ u16 page_order = max_t(s16, rq->mpwqe.page_shift - PAGE_SHIFT, 0);
+ u32 page_size = BIT(PAGE_SHIFT + page_order);
+
+ rq->wqe_overflow.page = alloc_pages(GFP_KERNEL, page_order);
if (!rq->wqe_overflow.page)
return -ENOMEM;
rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0,
- PAGE_SIZE, rq->buff.map_dir);
+ page_size, rq->buff.map_dir);
if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) {
- __free_page(rq->wqe_overflow.page);
+ __free_pages(rq->wqe_overflow.page, page_order);
return -ENOMEM;
}
return 0;
@@ -651,9 +655,12 @@ static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
{
- dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE,
- rq->buff.map_dir);
- __free_page(rq->wqe_overflow.page);
+ u16 page_order = max_t(s16, rq->mpwqe.page_shift - PAGE_SHIFT, 0);
+ u32 page_size = BIT(PAGE_SHIFT + page_order);
+
+ dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, page_size,
+ rq->buff.map_dir);
+ __free_pages(rq->wqe_overflow.page, page_order);
}
static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
@@ -884,15 +891,15 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
if (err)
goto err_rq_xdp_prog;
- err = mlx5e_alloc_mpwqe_rq_drop_page(rq);
- if (err)
- goto err_rq_wq_destroy;
-
rq->mpwqe.wq.db = &rq->mpwqe.wq.db[MLX5_RCV_DBR];
wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq);
rq->mpwqe.page_shift = mlx5e_mpwrq_page_shift(mdev, rqo);
+ err = mlx5e_alloc_mpwqe_rq_drop_page(rq);
+ if (err)
+ goto err_rq_wq_destroy;
+
rq->mpwqe.umr_mode = mlx5e_mpwrq_umr_mode(mdev, rqo);
rq->mpwqe.pages_per_wqe =
mlx5e_mpwrq_pages_per_wqe(mdev, rq->mpwqe.page_shift,
--
2.44.0
2026-02-23 20:41 [PATCH net-next 00/15] net/mlx5e: SHAMPO, Allow high order pages in zerocopy mode Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 01/15] net/mlx5e: Make mlx5e_rq_param naming consistent Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 02/15] net/mlx5e: Extract striding rq param calculation in function Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 03/15] net/mlx5e: Extract max_xsk_wqebbs into its own function Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 04/15] net/mlx5e: Expose and rename xsk channel parameter function Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 05/15] net/mlx5e: Alloc xsk channel param out of mlx5e_open_xsk() Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 06/15] net/mlx5e: Move xsk param into new option container struct Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 07/15] net/mlx5e: Drop unused channel parameters Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 08/15] net/mlx5e: SHAMPO, Always calculate page size Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 09/15] net/mlx5e: Set page_pool order based on calculated page_shift Tariq Toukan
2026-02-23 20:41 ` Tariq Toukan [this message]
2026-02-23 20:41 ` [PATCH net-next 11/15] net/mlx5e: RX, Make page frag bias more robust Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 12/15] net/mlx5e: Add queue config ops for page size Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 13/15] net/mlx5e: Pass netdev queue config to param calculations Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 14/15] net/mlx5e: Add param helper to calculate max page size Tariq Toukan
2026-02-23 20:41 ` [PATCH net-next 15/15] net/mlx5e: SHAMPO, Allow high order pages in zerocopy mode Tariq Toukan
2026-02-26 10:10 ` [PATCH net-next 00/15] " patchwork-bot+netdevbpf