From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
netdev@vger.kernel.org, Tariq Toukan <tariqt@nvidia.com>,
Maxim Mikityanskiy <maximmi@nvidia.com>
Subject: [PATCH net-next 13/16] net/mlx5e: Optimize RQ page deallocation
Date: Fri, 30 Sep 2022 09:29:00 -0700
Message-ID: <20220930162903.62262-14-saeed@kernel.org>
In-Reply-To: <20220930162903.62262-1-saeed@kernel.org>
From: Maxim Mikityanskiy <maximmi@nvidia.com>
mlx5e_free_rx_mpwqe loops over all pages of an MPWQE, calling
mlx5e_page_release for the ones that are not scheduled for XDP_TX or
XDP_REDIRECT; mlx5e_page_release then checks whether it's an XSK RQ or
a regular one, once per page/XSK frame, even though the answer is the
same for the whole WQE. This loop-invariant check can be moved outside
the loop to reduce the number of branches, as sketched below.
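Condensed from the diff below, a rough before/after sketch of the
MPWQE path (comments and surrounding code trimmed):

    /* Before: mlx5e_page_release re-checks rq->xsk_pool per page. */
    for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
        if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
            mlx5e_page_release(rq, &alloc_units[i], recycle);

    /* After: the loop-invariant XSK check is hoisted out. */
    if (rq->xsk_pool) {
        for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
            if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
                xsk_buff_free(alloc_units[i].xsk);
    } else {
        for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
            if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
                mlx5e_page_release_dynamic(rq, alloc_units[i].page,
                                           recycle);
    }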
mlx5e_free_rx_wqe loops over all fragments, calling mlx5e_page_release
for the ones that are last in a page; mlx5e_page_release in turn checks
whether it's an XSK RQ or a regular one for each such fragment. Using
the fact that XSK doesn't support multiple fragments, the function can
be optimized for both XSK and regular usages (resulting code sketched
after this list):
1. Make an early check for XSK and call its deallocator directly,
saving three branches (the loop condition, the frag->last_in_page check
and the selection of the deallocator).
2. Call the regular deallocator directly in the non-XSK case, saving
one branch for every fragment after the first.
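Condensed from the diff below, a sketch of the resulting
mlx5e_free_rx_wqe (the Reuse Ring comment from the real code is
shortened here):

    static inline void mlx5e_free_rx_wqe(struct mlx5e_rq *rq,
                                         struct mlx5e_wqe_frag_info *wi,
                                         bool recycle)
    {
        int i;

        if (rq->xsk_pool) {
            /* XSK frames have a single fragment; free it directly.
             * `recycle` is ignored: the frame always goes back to the
             * Reuse Ring.
             */
            xsk_buff_free(wi->au->xsk);
            return;
        }

        for (i = 0; i < rq->wqe.info.num_frags; i++, wi++)
            mlx5e_put_rx_frag(rq, wi, recycle);
    }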
After these changes, mlx5e_page_release is removed altogether, as it
has no callers left.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
.../ethernet/mellanox/mlx5/core/en/xsk/rx.c | 2 +-
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 41 +++++++++++--------
2 files changed, 24 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 7bd49f0b1271..661d2d5748f4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -253,7 +253,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
return NULL; /* page/packet was consumed by XDP */
/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
- * will be handled by mlx5e_put_rx_frag.
+ * will be handled by mlx5e_free_rx_wqe.
* On SKB allocation failure, NULL is returned.
*/
return mlx5e_xsk_construct_skb(rq, xdp->data, xdp->data_end - xdp->data);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index d0db6a66cb46..36eda4c958a0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -317,20 +317,6 @@ void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool rec
}
}
-static inline void mlx5e_page_release(struct mlx5e_rq *rq,
- union mlx5e_alloc_unit *au,
- bool recycle)
-{
- if (rq->xsk_pool)
- /* The `recycle` parameter is ignored, and the page is always
- * put into the Reuse Ring, because there is no way to return
- * the page to the userspace when the interface goes down.
- */
- xsk_buff_free(au->xsk);
- else
- mlx5e_page_release_dynamic(rq, au->page, recycle);
-}
-
static inline int mlx5e_get_rx_frag(struct mlx5e_rq *rq,
struct mlx5e_wqe_frag_info *frag)
{
@@ -352,7 +338,7 @@ static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq,
bool recycle)
{
if (frag->last_in_page)
- mlx5e_page_release(rq, frag->au, recycle);
+ mlx5e_page_release_dynamic(rq, frag->au->page, recycle);
}
static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix)
@@ -395,6 +381,15 @@ static inline void mlx5e_free_rx_wqe(struct mlx5e_rq *rq,
{
int i;
+ if (rq->xsk_pool) {
+ /* The `recycle` parameter is ignored, and the page is always
+ * put into the Reuse Ring, because there is no way to return
+ * the page to the userspace when the interface goes down.
+ */
+ xsk_buff_free(wi->au->xsk);
+ return;
+ }
+
for (i = 0; i < rq->wqe.info.num_frags; i++, wi++)
mlx5e_put_rx_frag(rq, wi, recycle);
}
@@ -463,9 +458,19 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle
no_xdp_xmit = bitmap_empty(wi->xdp_xmit_bitmap, rq->mpwqe.pages_per_wqe);
- for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
- if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
- mlx5e_page_release(rq, &alloc_units[i], recycle);
+ if (rq->xsk_pool) {
+ /* The `recycle` parameter is ignored, and the page is always
+ * put into the Reuse Ring, because there is no way to return
+ * the page to the userspace when the interface goes down.
+ */
+ for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
+ if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
+ xsk_buff_free(alloc_units[i].xsk);
+ } else {
+ for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
+ if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
+ mlx5e_page_release_dynamic(rq, alloc_units[i].page, recycle);
+ }
}
static void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq, u8 n)
--
2.37.3