* [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements
@ 2026-02-03 7:21 Tariq Toukan
2026-02-03 7:21 ` [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode Tariq Toukan
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Tariq Toukan @ 2026-02-03 7:21 UTC (permalink / raw)
To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
David S. Miller
Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky, netdev,
linux-rdma, linux-kernel, Gal Pressman, Dragos Tatulea,
Cosmin Ratiu, Moshe Shemesh
Hi,
This series by Dragos introduces multiple RX datapath enhancements to
the mlx5e driver.
The first patch adds SW handling for oversized packets in non-linear SKB
mode.
The second patch adds a reclaim mechanism to mitigate memory allocation
failures with memory providers.
Regards,
Tariq
V2:
- Fix duplicate empty lines (Paolo).
- Drop patch #3.
- Link to V1: https://lore.kernel.org/all/1768224129-1600265-1-git-send-email-tariqt@nvidia.com/
Dragos Tatulea (2):
net/mlx5e: RX, Drop oversized packets in non-linear mode
net/mlx5e: SHAMPO, Improve allocation recovery
.../net/ethernet/mellanox/mlx5/core/en_main.c | 25 +------------
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 37 +++++++++++++++++--
include/linux/mlx5/device.h | 5 +++
3 files changed, 41 insertions(+), 26 deletions(-)
base-commit: a22f57757f7e88c890499265c383ecb32900b645
--
2.44.0
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode
2026-02-03 7:21 [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements Tariq Toukan
@ 2026-02-03 7:21 ` Tariq Toukan
2026-02-04 22:55 ` Jacob Keller
2026-02-03 7:21 ` [PATCH net-next V2 2/2] net/mlx5e: SHAMPO, Improve allocation recovery Tariq Toukan
2026-02-05 5:30 ` [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Tariq Toukan @ 2026-02-03 7:21 UTC (permalink / raw)
To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
David S. Miller
Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky, netdev,
linux-rdma, linux-kernel, Gal Pressman, Dragos Tatulea,
Cosmin Ratiu, Moshe Shemesh
From: Dragos Tatulea <dtatulea@nvidia.com>
Currently the driver behaves inconsistently across modes when it
comes to oversized packets that are not dropped by the physical MTU
check in HW. This can happen in Multi Host configurations where each
port has a different MTU.
Current behavior:
1) Striding RQ in linear mode drops the packet in SW and counts it
with oversize_pkts_sw_drop.
2) Striding RQ in non-linear mode passes it up like a normal packet.
3) Legacy RQ can't receive oversized packets by design:
the RX WQE uses MTU sized packet buffers.
This inconsistency does not violate the netdev policy [1],
but it is better to be consistent across modes.
This patch aligns (2) with (1) and (3). One exception is added for
LRO: don't drop the oversized packet if it is an LRO packet.
Since rq->hw_mtu now always needs to be updated during the MTU change
flow, drop the reset avoidance optimization from mlx5e_change_mtu().
Extract the reading of the CQE LRO segment count into a helper
function, as it is now used twice.
[1] Documentation/networking/netdevices.rst#L205
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
.../net/ethernet/mellanox/mlx5/core/en_main.c | 25 ++-----------------
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 11 +++++++-
include/linux/mlx5/device.h | 5 ++++
3 files changed, 17 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 96dc6a6dc737..71e663c3b421 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4716,7 +4716,6 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_params new_params;
struct mlx5e_params *params;
- bool reset = true;
int err = 0;
mutex_lock(&priv->state_lock);
@@ -4742,28 +4741,8 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
goto out;
}
- if (params->packet_merge.type == MLX5E_PACKET_MERGE_LRO)
- reset = false;
-
- if (params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ &&
- params->packet_merge.type != MLX5E_PACKET_MERGE_SHAMPO) {
- bool is_linear_old = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev, params, NULL);
- bool is_linear_new = mlx5e_rx_mpwqe_is_linear_skb(priv->mdev,
- &new_params, NULL);
- u8 sz_old = mlx5e_mpwqe_get_log_rq_size(priv->mdev, params, NULL);
- u8 sz_new = mlx5e_mpwqe_get_log_rq_size(priv->mdev, &new_params, NULL);
-
- /* Always reset in linear mode - hw_mtu is used in data path.
- * Check that the mode was non-linear and didn't change.
- * If XSK is active, XSK RQs are linear.
- * Reset if the RQ size changed, even if it's non-linear.
- */
- if (!is_linear_old && !is_linear_new && !priv->xsk.refcnt &&
- sz_old == sz_new)
- reset = false;
- }
-
- err = mlx5e_safe_switch_params(priv, &new_params, preactivate, NULL, reset);
+ err = mlx5e_safe_switch_params(priv, &new_params, preactivate, NULL,
+ true);
out:
WRITE_ONCE(netdev->mtu, params->sw_mtu);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 1fc3720d2201..05b682327305 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1574,7 +1574,7 @@ static inline bool mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
struct mlx5e_rq *rq,
struct sk_buff *skb)
{
- u8 lro_num_seg = be32_to_cpu(cqe->srqn) >> 24;
+ u8 lro_num_seg = get_cqe_lro_num_seg(cqe);
struct mlx5e_rq_stats *stats = rq->stats;
struct net_device *netdev = rq->netdev;
@@ -2058,6 +2058,15 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
u16 linear_hr;
void *va;
+ if (unlikely(cqe_bcnt > rq->hw_mtu)) {
+ u8 lro_num_seg = get_cqe_lro_num_seg(cqe);
+
+ if (lro_num_seg <= 1) {
+ rq->stats->oversize_pkts_sw_drop++;
+ return NULL;
+ }
+ }
+
prog = rcu_dereference(rq->xdp_prog);
if (prog) {
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index d7f46a8fbfa1..b37fe39cef27 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -962,6 +962,11 @@ static inline u16 get_cqe_flow_tag(struct mlx5_cqe64 *cqe)
return be32_to_cpu(cqe->sop_drop_qpn) & 0xFFF;
}
+static inline u8 get_cqe_lro_num_seg(struct mlx5_cqe64 *cqe)
+{
+ return be32_to_cpu(cqe->srqn) >> 24;
+}
+
#define MLX5_MPWQE_LOG_NUM_STRIDES_EXT_BASE 3
#define MLX5_MPWQE_LOG_NUM_STRIDES_BASE 9
#define MLX5_MPWQE_LOG_NUM_STRIDES_MAX 16
--
2.44.0
* [PATCH net-next V2 2/2] net/mlx5e: SHAMPO, Improve allocation recovery
2026-02-03 7:21 [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements Tariq Toukan
2026-02-03 7:21 ` [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode Tariq Toukan
@ 2026-02-03 7:21 ` Tariq Toukan
2026-02-04 22:57 ` Jacob Keller
2026-02-05 5:30 ` [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Tariq Toukan @ 2026-02-03 7:21 UTC (permalink / raw)
To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
David S. Miller
Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky, netdev,
linux-rdma, linux-kernel, Gal Pressman, Dragos Tatulea,
Cosmin Ratiu, Moshe Shemesh
From: Dragos Tatulea <dtatulea@nvidia.com>
When memory providers are used, there is a disconnect between the
page_pool size and the available memory in the provider. This means
that the page_pool can run out of memory if the user didn't provision
a large enough buffer.
Under these conditions, mlx5 gets stuck trying to allocate new
buffers without being able to release existing buffers. This happens due
to the optimization introduced in commit 4c2a13236807
("net/mlx5e: RX, Defer page release in striding rq for better recycling")
which delays WQE releases to increase the chance of page_pool direct
recycling. The optimization was developed before memory providers
existed and this circumstance was not considered.
This patch unblocks the queue by reclaiming pages from WQEs that can be
freed and doing a one-shot retry. A WQE can be freed when:
1) All its strides have been consumed (WQE is no longer in linked list).
2) The WQE pages/netmems have not been previously released.
This reclaim mechanism is useful for regular pages as well.
Note that provisioning memory that can't fill even one MPWQE (64
4K pages) will still render the queue unusable. The same applies when
the application doesn't release its buffers for various reasons, or
for a combination of the two: a very small buffer is provisioned, the
application releases buffers in bulk, and the bulk size is never
reached, so the queue stays stuck.
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 26 +++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 05b682327305..849e1f16482a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1087,11 +1087,24 @@ int mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
return i;
}
+static void mlx5e_reclaim_mpwqe_pages(struct mlx5e_rq *rq, int head,
+ int reclaim)
+{
+ struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
+
+ for (int i = 0; i < reclaim; i++) {
+ head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
+
+ mlx5e_dealloc_rx_mpwqe(rq, head);
+ }
+}
+
INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
{
struct mlx5_wq_ll *wq = &rq->mpwqe.wq;
u8 umr_completed = rq->mpwqe.umr_completed;
struct mlx5e_icosq *sq = rq->icosq;
+ bool reclaimed = false;
int alloc_err = 0;
u8 missing, i;
u16 head;
@@ -1126,11 +1139,20 @@ INDIRECT_CALLABLE_SCOPE bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
/* Deferred free for better page pool cache usage. */
mlx5e_free_rx_mpwqe(rq, wi);
+retry:
alloc_err = rq->xsk_pool ? mlx5e_xsk_alloc_rx_mpwqe(rq, head) :
mlx5e_alloc_rx_mpwqe(rq, head);
+ if (unlikely(alloc_err)) {
+ int reclaim = i - 1;
- if (unlikely(alloc_err))
- break;
+ if (reclaimed || !reclaim)
+ break;
+
+ mlx5e_reclaim_mpwqe_pages(rq, head, reclaim);
+ reclaimed = true;
+
+ goto retry;
+ }
head = mlx5_wq_ll_get_wqe_next_ix(wq, head);
} while (--i);
--
2.44.0
* Re: [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode
2026-02-03 7:21 ` [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode Tariq Toukan
@ 2026-02-04 22:55 ` Jacob Keller
0 siblings, 0 replies; 6+ messages in thread
From: Jacob Keller @ 2026-02-04 22:55 UTC (permalink / raw)
To: Tariq Toukan, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Andrew Lunn, David S. Miller
Cc: Saeed Mahameed, Mark Bloch, Leon Romanovsky, netdev, linux-rdma,
linux-kernel, Gal Pressman, Dragos Tatulea, Cosmin Ratiu,
Moshe Shemesh
On 2/2/2026 11:21 PM, Tariq Toukan wrote:
> From: Dragos Tatulea <dtatulea@nvidia.com>
>
> Currently the driver has an inconsistent behaviour between modes when it
> comes to oversized packets that are not dropped through the physical MTU
> check in HW. This can happen for Multi Host configurations where each
> port has a different MTU.
>
> Current behavior:
>
> 1) Striding RQ in linear mode drops the packet in SW and counts it
> with oversize_pkts_sw_drop.
>
> 2) Striding RQ in non-linear mode allows it like a normal packet.
>
> 3) Legacy RQ can't receive oversized packets by design:
> the RX WQE uses MTU sized packet buffers.
>
> This inconsistency is not a violation of the netdev policy [1]
> but it is better to be consistent across modes.
>
> This patch aligns (2) with (1) and (3). One exception is added for
> LRO: don't drop the oversized packet if it is an LRO packet.
>
The doc also says that the preference is to drop packets, so this makes
sense.
> As now rq->hw_mtu always needs to be updated during the MTU change flow,
> drop the reset avoidance optimization from mlx5e_change_mtu().
>
> Extract the CQE LRO segments reading into a helper function as it
> is used twice now.
>
> [1] Documentation/networking/netdevices.rst#L205
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
> ---
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
* Re: [PATCH net-next V2 2/2] net/mlx5e: SHAMPO, Improve allocation recovery
2026-02-03 7:21 ` [PATCH net-next V2 2/2] net/mlx5e: SHAMPO, Improve allocation recovery Tariq Toukan
@ 2026-02-04 22:57 ` Jacob Keller
0 siblings, 0 replies; 6+ messages in thread
From: Jacob Keller @ 2026-02-04 22:57 UTC (permalink / raw)
To: Tariq Toukan, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Andrew Lunn, David S. Miller
Cc: Saeed Mahameed, Mark Bloch, Leon Romanovsky, netdev, linux-rdma,
linux-kernel, Gal Pressman, Dragos Tatulea, Cosmin Ratiu,
Moshe Shemesh
On 2/2/2026 11:21 PM, Tariq Toukan wrote:
> From: Dragos Tatulea <dtatulea@nvidia.com>
>
> When memory providers are used, there is a disconnect between the
> page_pool size and the available memory in the provider. This means
> that the page_pool can run out of memory if the user didn't provision
> a large enough buffer.
>
> Under these conditions, mlx5 gets stuck trying to allocate new
> buffers without being able to release existing buffers. This happens due
> to the optimization introduced in commit 4c2a13236807
> ("net/mlx5e: RX, Defer page release in striding rq for better recycling")
> which delays WQE releases to increase the chance of page_pool direct
> recycling. The optimization was developed before memory providers
> existed and this circumstance was not considered.
>
> This patch unblocks the queue by reclaiming pages from WQEs that can be
> freed and doing a one-shot retry. A WQE can be freed when:
> 1) All its strides have been consumed (WQE is no longer in linked list).
> 2) The WQE pages/netmems have not been previously released.
>
> This reclaim mechanism is useful for regular pages as well.
>
> Note that provisioning memory that can't fill even one MPWQE (64
> 4K pages) will still render the queue unusable. Same when
> the application doesn't release its buffers for various reasons.
> Or a combination of the two: a very small buffer is provisioned,
> application releases buffers in bulk, bulk size never reached
> => queue is stuck.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
> ---
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
* Re: [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements
2026-02-03 7:21 [PATCH net-next V2 0/2] net/mlx5e: RX datapath enhancements Tariq Toukan
2026-02-03 7:21 ` [PATCH net-next V2 1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode Tariq Toukan
2026-02-03 7:21 ` [PATCH net-next V2 2/2] net/mlx5e: SHAMPO, Improve allocation recovery Tariq Toukan
@ 2026-02-05 5:30 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-02-05 5:30 UTC (permalink / raw)
To: Tariq Toukan
Cc: edumazet, kuba, pabeni, andrew+netdev, davem, saeedm, mbloch,
leon, netdev, linux-rdma, linux-kernel, gal, dtatulea, cratiu,
moshe
Hello:
This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Tue, 3 Feb 2026 09:21:28 +0200 you wrote:
> Hi,
>
> This series by Dragos introduces multiple RX datapath enhancements to
> the mlx5e driver.
>
> First patch adds SW handling for oversized packets in non-linear SKB
> mode.
>
> [...]
Here is the summary with links:
- [net-next,V2,1/2] net/mlx5e: RX, Drop oversized packets in non-linear mode
https://git.kernel.org/netdev/net-next/c/7ed7a576f20a
- [net-next,V2,2/2] net/mlx5e: SHAMPO, Improve allocation recovery
https://git.kernel.org/netdev/net-next/c/09e6960e8435
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html