From: Hariprasad Kelam <hkelam@marvell.com>
To: Zijian Zhang <zijianzhang@bytedance.com>
Cc: <netdev@vger.kernel.org>, <davem@davemloft.net>,
<kuba@kernel.org>, <pabeni@redhat.com>, <edumazet@google.com>,
<andrew+netdev@lunn.ch>, <saeedm@nvidia.com>, <gal@nvidia.com>,
<leonro@nvidia.com>, <witu@nvidia.com>, <parav@nvidia.com>,
<tariqt@nvidia.com>
Subject: Re: [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
Date: Fri, 31 Oct 2025 12:26:30 +0530 [thread overview]
Message-ID: <aQRdnj99v64Ozvj/@test-OptiPlex-Tower-Plus-7010> (raw)
In-Reply-To: <e25c6c0c-1e2a-48c2-9606-5f51f36afbf0@bytedance.com>
On 2025-10-31 at 06:12:50, Zijian Zhang (zijianzhang@bytedance.com) wrote:
> From: Zijian Zhang <zijianzhang@bytedance.com>
>
> When performing XDP_REDIRECT from one mlx5 device to another, selecting
> the SQ with smp_processor_id() can produce an out-of-range index.
>
> Assume eth0 is redirecting a packet to eth1, eth1 is configured
> with only 8 channels, while eth0 has its RX queues pinned to
> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
> such a CPU and redirected to eth1, the driver uses smp_processor_id()
> as the SQ index. Since the CPU ID is larger than the number of queues
> on eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
> the redirect fails.
>
> This patch fixes the issue by mapping the CPU ID to a valid channel
> index using modulo arithmetic:
>
> sq_num = smp_processor_id() % priv->channels.num;
>
> With this change, XDP_REDIRECT works correctly even when the source
> device uses high CPU affinities and the target device has fewer TX
> queues.
>
> Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> index 5d51600935a6..61394257c65f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> @@ -855,11 +855,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n,
> struct xdp_frame **frames,
> if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> return -EINVAL;
>
> - sq_num = smp_processor_id();
> -
> - if (unlikely(sq_num >= priv->channels.num))
> - return -ENXIO;
> -
> + sq_num = smp_processor_id() % priv->channels.num;
> sq = priv->channels.c[sq_num]->xdpsq;
>
> for (i = 0; i < n; i++) {
Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
Thread overview: 4+ messages
2025-10-31 0:42 [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection Zijian Zhang
2025-10-31 6:56 ` Hariprasad Kelam [this message]
2025-10-31 18:40 ` Jakub Kicinski
2025-10-31 19:54 ` Zijian Zhang