From: Saeed Mahameed <saeed@kernel.org>
To: "David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, Maxim Mikityanskiy <maximmi@nvidia.com>,
	Saeed Mahameed <saeedm@nvidia.com>
Subject: [net-next 06/15] net/mlx5e: Drop cqe_bcnt32 from mlx5e_skb_from_cqe_mpwrq_linear
Date: Thu, 17 Mar 2022 11:54:15 -0700
Message-ID: <20220317185424.287982-7-saeed@kernel.org>
In-Reply-To: <20220317185424.287982-1-saeed@kernel.org>

From: Maxim Mikityanskiy <maximmi@nvidia.com>

The packet size in mlx5e_skb_from_cqe_mpwrq_linear can't overflow u16,
since the maximum packet size in a linear striding RQ is 2^13 bytes.
Drop the unneeded u32 variable.
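
To see why u16 is wide enough, here is a minimal standalone sketch of
the bound argument; the MAX_LINEAR_STRIDING_RQ_PKT_SZ name below is a
hypothetical stand-in for the 2^13-byte limit, not the driver's actual
define:

	#include <assert.h>
	#include <stdint.h>

	/* Assumed bound from the commit message: packets in a linear
	 * striding RQ are at most 2^13 = 8192 bytes. */
	#define MAX_LINEAR_STRIDING_RQ_PKT_SZ (1u << 13)

	int main(void)
	{
		/* 8192 is well under UINT16_MAX (65535), so keeping the
		 * byte count in a u16 cannot truncate; the u32 copy
		 * (cqe_bcnt32) added nothing. */
		_Static_assert(MAX_LINEAR_STRIDING_RQ_PKT_SZ <= UINT16_MAX,
			       "linear striding RQ packet size fits in u16");

		uint16_t cqe_bcnt = MAX_LINEAR_STRIDING_RQ_PKT_SZ; /* worst case */
		assert(cqe_bcnt == 8192); /* no truncation occurred */
		return 0;
	}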

Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 7c490c0ca370..4b8699f39200 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1848,7 +1848,6 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 {
 	struct mlx5e_dma_info *di = &wi->umr.dma_info[page_idx];
 	u16 rx_headroom = rq->buff.headroom;
-	u32 cqe_bcnt32 = cqe_bcnt;
 	struct bpf_prog *prog;
 	struct sk_buff *skb;
 	u32 metasize = 0;
@@ -1863,7 +1862,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 
 	va             = page_address(di->page) + head_offset;
 	data           = va + rx_headroom;
-	frag_size      = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt32);
+	frag_size      = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 
 	dma_sync_single_range_for_cpu(rq->pdev, di->addr, head_offset,
 				      frag_size, DMA_FROM_DEVICE);
@@ -1874,7 +1873,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		struct xdp_buff xdp;
 
 		net_prefetchw(va); /* xdp_frame data area */
-		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp);
+		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
 		if (mlx5e_xdp_handle(rq, di, prog, &xdp)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
@@ -1883,10 +1882,10 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 
 		rx_headroom = xdp.data - xdp.data_hard_start;
 		metasize = xdp.data - xdp.data_meta;
-		cqe_bcnt32 = xdp.data_end - xdp.data;
+		cqe_bcnt = xdp.data_end - xdp.data;
 	}
-	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt32);
-	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt32, metasize);
+	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
+	skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
 	if (unlikely(!skb))
 		return NULL;
 
-- 
2.35.1

