netdev.vger.kernel.org archive mirror
* [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
@ 2025-10-31 23:10 Zijian Zhang
  2025-10-31 23:31 ` Jakub Kicinski
  2025-11-02 13:02 ` Tariq Toukan
  0 siblings, 2 replies; 5+ messages in thread
From: Zijian Zhang @ 2025-10-31 23:10 UTC (permalink / raw)
  To: netdev
  Cc: davem, kuba, pabeni, edumazet, andrew+netdev, saeedm, gal, leonro,
	witu, parav, tariqt, hkelam, Zijian Zhang

When performing XDP_REDIRECT from one mlx5 device to another, the CPU
ID returned by smp_processor_id() is used as the SQ index and may
exceed the number of channels on the target device.

Assume eth0 is redirecting a packet to eth1, where eth1 is configured
with only 8 channels while eth0 has its RX queues pinned to
higher-numbered CPUs (e.g. CPU 12). When a packet is received on
such a CPU and redirected to eth1, the driver uses smp_processor_id()
as the SQ index. Since the CPU ID is greater than or equal to the
number of queues on eth1, the bounds check (sq_num >=
priv->channels.num) trips and the redirect fails with -ENXIO.

This patch fixes the issue by mapping the CPU ID to a valid channel
index using modulo arithmetic.

    sq_num = smp_processor_id() % priv->channels.num;

With this change, XDP_REDIRECT works correctly even when the source
device uses high CPU affinities and the target device has fewer TX
queues.

v2:
As suggested by Jakub Kicinski, I added a lock to synchronize TX when
XDP redirects packets to the same queue from multiple CPUs.

Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h      | 3 +++
 drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c  | 8 +++-----
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 14e3207b14e7..2281154442d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -516,6 +516,9 @@ struct mlx5e_xdpsq {
 	/* control path */
 	struct mlx5_wq_ctrl        wq_ctrl;
 	struct mlx5e_channel      *channel;
+
+	/* synchronize simultaneous xdp_xmit on the same ring */
+	spinlock_t                 xdp_tx_lock;
 } ____cacheline_aligned_in_smp;
 
 struct mlx5e_xdp_buff {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 5d51600935a6..6225734b256a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -855,13 +855,10 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
 
-	sq_num = smp_processor_id();
-
-	if (unlikely(sq_num >= priv->channels.num))
-		return -ENXIO;
-
+	sq_num = smp_processor_id() % priv->channels.num;
 	sq = priv->channels.c[sq_num]->xdpsq;
 
+	spin_lock(&sq->xdp_tx_lock);
 	for (i = 0; i < n; i++) {
 		struct mlx5e_xmit_data_frags xdptxdf = {};
 		struct xdp_frame *xdpf = frames[i];
@@ -942,6 +939,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 	if (flags & XDP_XMIT_FLUSH)
 		mlx5e_xmit_xdp_doorbell(sq);
 
+	spin_unlock(&sq->xdp_tx_lock);
 	return nxmit;
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 9c46511e7b43..ced9eefe38aa 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1559,6 +1559,8 @@ static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
 	if (err)
 		goto err_sq_wq_destroy;
 
+	spin_lock_init(&sq->xdp_tx_lock);
+
 	return 0;
 
 err_sq_wq_destroy:
-- 
2.37.1 (Apple Git-137.1)



* Re: [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
  2025-10-31 23:10 [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection Zijian Zhang
@ 2025-10-31 23:31 ` Jakub Kicinski
  2025-11-02 13:02 ` Tariq Toukan
  1 sibling, 0 replies; 5+ messages in thread
From: Jakub Kicinski @ 2025-10-31 23:31 UTC (permalink / raw)
  To: Zijian Zhang
  Cc: netdev, davem, pabeni, edumazet, andrew+netdev, saeedm, gal,
	leonro, witu, parav, tariqt, hkelam

On Fri, 31 Oct 2025 16:10:38 -0700 Zijian Zhang wrote:
> v2:
> As suggested by Jakub Kicinski, I added a lock to synchronize TX when
> XDP redirects packets to the same queue from multiple CPUs.

Oh, I definitely did not suggest that you add a lock :)
I just told you that there isn't one.
I'll let other people tell you why this is likely unacceptable.

From me, just a process note: please follow our mailing list guidelines:
https://www.kernel.org/doc/html/next/process/maintainer-netdev.html
-- 
pw-bot: au


* Re: [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
  2025-10-31 23:10 [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection Zijian Zhang
  2025-10-31 23:31 ` Jakub Kicinski
@ 2025-11-02 13:02 ` Tariq Toukan
  2025-11-03  0:13   ` Alexander Duyck
  1 sibling, 1 reply; 5+ messages in thread
From: Tariq Toukan @ 2025-11-02 13:02 UTC (permalink / raw)
  To: Zijian Zhang, netdev
  Cc: davem, kuba, pabeni, edumazet, andrew+netdev, saeedm, gal, leonro,
	witu, parav, tariqt, hkelam, Alexander Lobakin,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Lorenzo Bianconi, Jesse Brandeburg, Alexander Duyck, Salil Mehta



On 01/11/2025 1:10, Zijian Zhang wrote:
> When performing XDP_REDIRECT from one mlx5 device to another, the CPU
> ID returned by smp_processor_id() is used as the SQ index and may
> exceed the number of channels on the target device.
> 
> Assume eth0 is redirecting a packet to eth1, where eth1 is configured
> with only 8 channels while eth0 has its RX queues pinned to
> higher-numbered CPUs (e.g. CPU 12). When a packet is received on
> such a CPU and redirected to eth1, the driver uses smp_processor_id()
> as the SQ index. Since the CPU ID is greater than or equal to the
> number of queues on eth1, the bounds check (sq_num >=
> priv->channels.num) trips and the redirect fails with -ENXIO.
> 
> This patch fixes the issue by mapping the CPU ID to a valid channel
> index using modulo arithmetic.
> 
>      sq_num = smp_processor_id() % priv->channels.num;
> 
> With this change, XDP_REDIRECT works correctly even when the source
> device uses high CPU affinities and the target device has fewer TX
> queues.
> 

++

This was indeed an open issue in XDP_REDIRECT. It was discussed in 
multiple ML threads and conferences, with Jesper and friends, including 
in https://netdevconf.info/0x15/session.html?XDP-General-Workshop.

I am not aware of a clear conclusion; it seems that things were left
for the vendor drivers to decide.

The current code keeps things super fast, but I understand the
limitation, especially on modern systems with a large number of CPUs.

> v2:
> As suggested by Jakub Kicinski, I added a lock to synchronize TX when
> XDP redirects packets to the same queue from multiple CPUs.
> 
> Signed-off-by: Zijian Zhang <zijianzhang@bytedance.com>
> Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/en.h      | 3 +++
>   drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c  | 8 +++-----
>   drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
>   3 files changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> index 14e3207b14e7..2281154442d9 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> @@ -516,6 +516,9 @@ struct mlx5e_xdpsq {
>   	/* control path */
>   	struct mlx5_wq_ctrl        wq_ctrl;
>   	struct mlx5e_channel      *channel;
> +
> +	/* synchronize simultaneous xdp_xmit on the same ring */
> +	spinlock_t                 xdp_tx_lock;
>   } ____cacheline_aligned_in_smp;
>   
>   struct mlx5e_xdp_buff {
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> index 5d51600935a6..6225734b256a 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> @@ -855,13 +855,10 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>   	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
>   		return -EINVAL;
>   
> -	sq_num = smp_processor_id();
> -
> -	if (unlikely(sq_num >= priv->channels.num))
> -		return -ENXIO;
> -
> +	sq_num = smp_processor_id() % priv->channels.num;

Modulo is a costly operation.
A while loop with subtraction would likely converge faster.
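
Something along these lines (untested sketch, reusing the names already
in mlx5e_xdp_xmit) would keep the common in-range case to a single
compare:

	sq_num = smp_processor_id();
	while (sq_num >= priv->channels.num)
		sq_num -= priv->channels.num;

When the CPU id is already below priv->channels.num this is one compare
and no division; only CPUs at or above the channel count pay the extra
subtractions.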

>   	sq = priv->channels.c[sq_num]->xdpsq;
>   
> +	spin_lock(&sq->xdp_tx_lock);
>   	for (i = 0; i < n; i++) {
>   		struct mlx5e_xmit_data_frags xdptxdf = {};
>   		struct xdp_frame *xdpf = frames[i];
> @@ -942,6 +939,7 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>   	if (flags & XDP_XMIT_FLUSH)
>   		mlx5e_xmit_xdp_doorbell(sq);
>   
> +	spin_unlock(&sq->xdp_tx_lock);
>   	return nxmit;
>   }
>   
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index 9c46511e7b43..ced9eefe38aa 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -1559,6 +1559,8 @@ static int mlx5e_alloc_xdpsq(struct mlx5e_channel *c,
>   	if (err)
>   		goto err_sq_wq_destroy;
>   
> +	spin_lock_init(&sq->xdp_tx_lock);
> +
>   	return 0;
>   
>   err_sq_wq_destroy:



* Re: [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
  2025-11-02 13:02 ` Tariq Toukan
@ 2025-11-03  0:13   ` Alexander Duyck
  2025-11-03 19:13     ` Zijian Zhang
  0 siblings, 1 reply; 5+ messages in thread
From: Alexander Duyck @ 2025-11-03  0:13 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Zijian Zhang, netdev, davem, kuba, pabeni, edumazet,
	andrew+netdev, saeedm, gal, leonro, witu, parav, tariqt, hkelam,
	Alexander Lobakin, Jesper Dangaard Brouer,
	Toke Høiland-Jørgensen, Lorenzo Bianconi,
	Jesse Brandeburg, Salil Mehta

On Sun, Nov 2, 2025 at 5:02 AM Tariq Toukan <ttoukan.linux@gmail.com> wrote:
> On 01/11/2025 1:10, Zijian Zhang wrote:

...

> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > index 5d51600935a6..6225734b256a 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > @@ -855,13 +855,10 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
> >       if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
> >               return -EINVAL;
> >
> > -     sq_num = smp_processor_id();
> > -
> > -     if (unlikely(sq_num >= priv->channels.num))
> > -             return -ENXIO;
> > -
> > +     sq_num = smp_processor_id() % priv->channels.num;
>
> Modulo is a costly operation.
> A while loop with subtraction would likely converge faster.

I agree. The modulo is optimizing for the worst exception case, and
heavily penalizing the case where it does nothing. A while loop in
most cases will likely just be a test and short jump, which would be
two or three cycles, whereas the modulo would cost you somewhere in
the tens of cycles on most processors, as I recall.


* Re: [PATCH net-next v2] net/mlx5e: Modify mlx5e_xdp_xmit sq selection
  2025-11-03  0:13   ` Alexander Duyck
@ 2025-11-03 19:13     ` Zijian Zhang
  0 siblings, 0 replies; 5+ messages in thread
From: Zijian Zhang @ 2025-11-03 19:13 UTC (permalink / raw)
  To: Alexander Duyck, Tariq Toukan
  Cc: netdev, davem, kuba, pabeni, edumazet, andrew+netdev, saeedm, gal,
	leonro, witu, parav, tariqt, hkelam, Alexander Lobakin,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Lorenzo Bianconi, Jesse Brandeburg, Salil Mehta

Thanks for the info and explanation, that makes a lot of sense :)
Modulo here is too costly.
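
For v3 I am thinking of something along these lines (untested sketch,
keeping the lock added in v2):

	sq_num = smp_processor_id();
	while (sq_num >= priv->channels.num)
		sq_num -= priv->channels.num;

	sq = priv->channels.c[sq_num]->xdpsq;

	spin_lock(&sq->xdp_tx_lock);

That way the in-range case pays only a compare and the modulo goes away
entirely.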

On 11/2/25 4:13 PM, Alexander Duyck wrote:
> On Sun, Nov 2, 2025 at 5:02 AM Tariq Toukan <ttoukan.linux@gmail.com> wrote:
>> On 01/11/2025 1:10, Zijian Zhang wrote:
> 
> ...
> 
>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
>>> index 5d51600935a6..6225734b256a 100644
>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
>>> @@ -855,13 +855,10 @@ int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
>>>        if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
>>>                return -EINVAL;
>>>
>>> -     sq_num = smp_processor_id();
>>> -
>>> -     if (unlikely(sq_num >= priv->channels.num))
>>> -             return -ENXIO;
>>> -
>>> +     sq_num = smp_processor_id() % priv->channels.num;
>>
>> Modulo is a costly operation.
>> A while loop with subtraction would likely converge faster.
> 
> I agree. The modulo is optimizing for the worst exception case, and
> heavily penalizing the case where it does nothing. A while loop in
> most cases will likely just be a test and short jump, which would be
> two or three cycles, whereas the modulo would cost you somewhere in
> the tens of cycles on most processors, as I recall.


