From: Tariq Toukan <ttoukan.linux@gmail.com>
To: Leon Hwang <leon.hwang@linux.dev>, netdev@vger.kernel.org
Cc: Saeed Mahameed <saeedm@nvidia.com>,
Tariq Toukan <tariqt@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S . Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Oz Shlomo <ozsh@mellanox.com>, Paul Blakey <paulb@mellanox.com>,
Khalid Manaa <khalidm@nvidia.com>,
Achiad Shochat <achiad@mellanox.com>,
Jiayuan Chen <jiayuan.chen@linux.dev>,
linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
Leon Huang Fu <leon.huangfu@shopee.com>
Subject: Re: [PATCH net-next] net/mlx5e: Mask wqe_id when handling rx cqe
Date: Wed, 14 Jan 2026 10:23:21 +0200 [thread overview]
Message-ID: <cfa6e78d-82ca-43d2-a8df-48fcb7d6301e@gmail.com> (raw)
In-Reply-To: <20260112080323.65456-1-leon.hwang@linux.dev>
On 12/01/2026 10:03, Leon Hwang wrote:
> The wqe_id from the CQE contains wrap counter bits in addition to the WQE
> index. Mask it with sz_m1 to prevent out-of-bounds access to the
> rq->mpwqe.info[] array when the wrap counter causes wqe_id to exceed the
> RQ size.
>
> Without this fix, the driver crashes with NULL pointer dereference:
>
> BUG: kernel NULL pointer dereference, address: 0000000000000020
> RIP: 0010:mlx5e_skb_from_cqe_mpwrq_linear+0xb3/0x280 [mlx5_core]
> Call Trace:
> <IRQ>
> mlx5e_handle_rx_cqe_mpwrq+0xe3/0x290 [mlx5_core]
> mlx5e_poll_rx_cq+0x97/0x820 [mlx5_core]
> mlx5e_napi_poll+0x110/0x820 [mlx5_core]
>
Hi,
We do not expect an out-of-bounds index here, so masking it this way is
not necessarily the correct fix.
Can you please elaborate on your test case, setup, and how to reproduce it?
> Fixes: dfd9e7500cd4 ("net/mlx5e: Rx, Split rep rx mpwqe handler from nic")
> Fixes: f97d5c2a453e ("net/mlx5e: Add handle SHAMPO cqe support")
> Fixes: 461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
> Signed-off-by: Leon Huang Fu <leon.huangfu@shopee.com>
> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h | 5 +++++
> drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 6 +++---
> 2 files changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> index 7e191e1569e8..df8e671d5115 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
> @@ -583,4 +583,9 @@ static inline struct mlx5e_mpw_info *mlx5e_get_mpw_info(struct mlx5e_rq *rq, int
>
> return (struct mlx5e_mpw_info *)((char *)rq->mpwqe.info + array_size(i, isz));
> }
> +
> +static inline u16 mlx5e_rq_cqe_wqe_id(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
> +{
> + return be16_to_cpu(cqe->wqe_id) & rq->mpwqe.wq.fbc.sz_m1;
> +}
> #endif
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 1f6930c77437..25c04684271c 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -1957,7 +1957,7 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
> static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
> {
> u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe);
> - u16 wqe_id = be16_to_cpu(cqe->wqe_id);
> + u16 wqe_id = mlx5e_rq_cqe_wqe_id(rq, cqe);
> struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, wqe_id);
> u16 stride_ix = mpwrq_get_cqe_stride_index(cqe);
> u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
> @@ -2373,7 +2373,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
> u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe);
> u32 data_offset = wqe_offset & (PAGE_SIZE - 1);
> u32 cqe_bcnt = mpwrq_get_cqe_byte_cnt(cqe);
> - u16 wqe_id = be16_to_cpu(cqe->wqe_id);
> + u16 wqe_id = mlx5e_rq_cqe_wqe_id(rq, cqe);
> u32 page_idx = wqe_offset >> PAGE_SHIFT;
> u16 head_size = cqe->shampo.header_size;
> struct sk_buff **skb = &rq->hw_gro_data->skb;
> @@ -2478,7 +2478,7 @@ static void mlx5e_handle_rx_cqe_mpwrq_shampo(struct mlx5e_rq *rq, struct mlx5_cq
> static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
> {
> u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe);
> - u16 wqe_id = be16_to_cpu(cqe->wqe_id);
> + u16 wqe_id = mlx5e_rq_cqe_wqe_id(rq, cqe);
> struct mlx5e_mpw_info *wi = mlx5e_get_mpw_info(rq, wqe_id);
> u16 stride_ix = mpwrq_get_cqe_stride_index(cqe);
> u32 wqe_offset = stride_ix << rq->mpwqe.log_stride_sz;
Thread overview: 3+ messages
2026-01-12 8:03 [PATCH net-next] net/mlx5e: Mask wqe_id when handling rx cqe Leon Hwang
2026-01-14 8:23 ` Tariq Toukan [this message]
2026-01-14 8:53 ` Leon Hwang