* [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
@ 2025-10-31 0:42 Zijian Zhang
2025-10-31 6:56 ` Hariprasad Kelam
2025-10-31 18:40 ` Jakub Kicinski
0 siblings, 2 replies; 4+ messages in thread
From: Zijian Zhang @ 2025-10-31 0:42 UTC (permalink / raw)
To: netdev
Cc: davem, kuba, pabeni, edumazet, andrew+netdev, saeedm, gal, leonro,
witu, parav, tariqt
From: Zijian Zhang <zijianzhang@bytedance.com>
When performing XDP_REDIRECT from one mlx5 device to another, using
smp_processor_id() to select the queue may go out of range.

Assume eth0 is redirecting packets to eth1, where eth1 is configured
with only 8 channels while eth0 has its RX queues pinned to
higher-numbered CPUs (e.g. CPU 12). When a packet is received on
such a CPU and redirected to eth1, the driver uses smp_processor_id()
as the SQ index. Since the CPU ID exceeds the number of queues
on eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
the redirect fails.
This patch fixes the issue by mapping the CPU ID to a valid channel
index using modulo arithmetic:
sq_num = smp_processor_id() % priv->channels.num;
With this change, XDP_REDIRECT works correctly even when the source
device uses high CPU affinities and the target device has fewer TX
queues.
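[Editorial note] The selection change can be sketched in user space (hypothetical helper names; `num_channels` stands in for priv->channels.num):

```c
#include <errno.h>

/* Before the patch: CPUs beyond the channel count make the redirect
 * fail with -ENXIO. */
static int select_sq_old(int cpu, int num_channels)
{
	if (cpu >= num_channels)
		return -ENXIO;
	return cpu;
}

/* After the patch: any CPU id is folded into the valid range, so a
 * packet received on CPU 12 with 8 channels lands on SQ 4 instead of
 * failing. */
static int select_sq_new(int cpu, int num_channels)
{
	return cpu % num_channels;
}
```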
Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
---
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 5d51600935a6..61394257c65f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -855,11 +855,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
return -EINVAL;
- sq_num = smp_processor_id();
-
- if (unlikely(sq_num >= priv->channels.num))
- return -ENXIO;
-
+ sq_num = smp_processor_id() % priv->channels.num;
sq = priv->channels.c[sq_num]->xdpsq;
for (i = 0; i < n; i++) {
--
2.20.1
* Re: [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
2025-10-31 0:42 [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection Zijian Zhang
@ 2025-10-31 6:56 ` Hariprasad Kelam
2025-10-31 18:40 ` Jakub Kicinski
1 sibling, 0 replies; 4+ messages in thread
From: Hariprasad Kelam @ 2025-10-31 6:56 UTC (permalink / raw)
To: Zijian Zhang
Cc: netdev, davem, kuba, pabeni, edumazet, andrew+netdev, saeedm, gal,
leonro, witu, parav, tariqt
On 2025-10-31 at 06:12:50, Zijian Zhang (zijianzhang@bytedance.com) wrote:
> From: Zijian Zhang <zijianzhang@bytedance.com>
>
> When performing XDP_REDIRECT from one mlx5 device to another, using
> smp_processor_id() to select the queue may go out of range.
>
> Assume eth0 is redirecting packets to eth1, where eth1 is configured
> with only 8 channels while eth0 has its RX queues pinned to
> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
> such a CPU and redirected to eth1, the driver uses smp_processor_id()
> as the SQ index. Since the CPU ID exceeds the number of queues
> on eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
> the redirect fails.
>
> This patch fixes the issue by mapping the CPU ID to a valid channel
> index using modulo arithmetic:
>
> sq_num = smp_processor_id() % priv->channels.num;
>
> With this change, XDP_REDIRECT works correctly even when the source
> device uses high CPU affinities and the target device has fewer TX
> queues.
>
> Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
> ---
> drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> index 5d51600935a6..61394257c65f 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> @@ -855,11 +855,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
> if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> return -EINVAL;
>
> - sq_num = smp_processor_id();
> -
> - if (unlikely(sq_num >= priv->channels.num))
> - return -ENXIO;
> -
> + sq_num = smp_processor_id() % priv->channels.num;
> sq = priv->channels.c[sq_num]->xdpsq;
>
> for (i = 0; i < n; i++) {
Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
* Re: [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
2025-10-31 0:42 [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection Zijian Zhang
2025-10-31 6:56 ` Hariprasad Kelam
@ 2025-10-31 18:40 ` Jakub Kicinski
2025-10-31 19:54 ` Zijian Zhang
1 sibling, 1 reply; 4+ messages in thread
From: Jakub Kicinski @ 2025-10-31 18:40 UTC (permalink / raw)
To: Zijian Zhang
Cc: netdev, davem, pabeni, edumazet, andrew+netdev, saeedm, gal,
leonro, witu, parav, tariqt
On Thu, 30 Oct 2025 17:42:50 -0700 Zijian Zhang wrote:
> When performing XDP_REDIRECT from one mlx5 device to another, using
> smp_processor_id() to select the queue may go out of range.
>
> Assume eth0 is redirecting packets to eth1, where eth1 is configured
> with only 8 channels while eth0 has its RX queues pinned to
> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
> such a CPU and redirected to eth1, the driver uses smp_processor_id()
> as the SQ index. Since the CPU ID exceeds the number of queues
> on eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
> the redirect fails.
>
> This patch fixes the issue by mapping the CPU ID to a valid channel
> index using modulo arithmetic:
>
> sq_num = smp_processor_id() % priv->channels.num;
>
> With this change, XDP_REDIRECT works correctly even when the source
> device uses high CPU affinities and the target device has fewer TX
> queues.
And what if you have 8 queues and CPUs 0 and 8 try to Xmit at the same
time? Is there any locking here?
--
pw-bot: cr
* Re: [PATCH net-next] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
2025-10-31 18:40 ` Jakub Kicinski
@ 2025-10-31 19:54 ` Zijian Zhang
0 siblings, 0 replies; 4+ messages in thread
From: Zijian Zhang @ 2025-10-31 19:54 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, davem, pabeni, edumazet, andrew+netdev, saeedm, gal,
leonro, witu, parav, tariqt
On 10/31/25 11:40 AM, Jakub Kicinski wrote:
> On Thu, 30 Oct 2025 17:42:50 -0700 Zijian Zhang wrote:
>> When performing XDP_REDIRECT from one mlx5 device to another, using
>> smp_processor_id() to select the queue may go out of range.
>>
>> Assume eth0 is redirecting packets to eth1, where eth1 is configured
>> with only 8 channels while eth0 has its RX queues pinned to
>> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
>> such a CPU and redirected to eth1, the driver uses smp_processor_id()
>> as the SQ index. Since the CPU ID exceeds the number of queues
>> on eth1, the lookup (priv->channels.c[sq_num]) goes out of range and
>> the redirect fails.
>>
>> This patch fixes the issue by mapping the CPU ID to a valid channel
>> index using modulo arithmetic:
>>
>> sq_num = smp_processor_id() % priv->channels.num;
>>
>> With this change, XDP_REDIRECT works correctly even when the source
>> device uses high CPU affinities and the target device has fewer TX
>> queues.
>
> And what if you have 8 queues and CPUs 0 and 8 try to Xmit at the same
> time? Is there any locking here?
Thanks for pointing this out, I will add a lock here.
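[Editorial note] The collision is easy to see: with 8 queues, 0 % 8 == 8 % 8, so CPUs 0 and 8 select the same SQ and race on its producer state. A minimal user-space sketch of serializing a shared SQ (all names hypothetical; the real driver would use a per-SQ kernel spinlock, not this atomic-flag stand-in):

```c
#include <stdatomic.h>

/* Stand-in for an mlx5e XDP SQ whose producer side is now shared by
 * several CPUs after the modulo mapping. */
struct xdpsq_stub {
	atomic_flag lock;	/* stands in for a per-SQ spinlock */
	int enqueued;		/* frames placed on this SQ */
};

static void xmit_on_sq(struct xdpsq_stub *sq, int nframes)
{
	while (atomic_flag_test_and_set(&sq->lock))
		;		/* spin until we own the SQ */
	sq->enqueued += nframes;
	atomic_flag_clear(&sq->lock);
}

/* Two "CPUs" (run sequentially here for simplicity) that both map to
 * the same queue after % 8. */
static int demo_shared_sq(void)
{
	struct xdpsq_stub sq = { ATOMIC_FLAG_INIT, 0 };

	xmit_on_sq(&sq, 3);	/* e.g. CPU 0 */
	xmit_on_sq(&sq, 2);	/* e.g. CPU 8, same SQ */
	return sq.enqueued;
}
```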