* [PATCH net-next v5 0/2] net: mana: Enforce TX SGE limit and fix error cleanup
@ 2025-11-14 21:16 Aditya Garg
2025-11-14 21:16 ` [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit Aditya Garg
2025-11-14 21:16 ` [PATCH net-next v5 2/2] net: mana: Drop TX skb on post_work_request failure and unmap resources Aditya Garg
0 siblings, 2 replies; 8+ messages in thread
From: Aditya Garg @ 2025-11-14 21:16 UTC (permalink / raw)
To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
kuba, pabeni, longli, kotaranov, horms, shradhagupta, ssengar,
ernis, dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov,
sbhatta, linux-hyperv, netdev, linux-kernel, linux-rdma,
gargaditya
Cc: Aditya Garg
Add pre-transmission checks to block SKBs that exceed the hardware's SGE
limit. Force software segmentation for GSO traffic and linearize non-GSO
packets as needed.
Update TX error handling to drop failed SKBs and unmap resources
immediately.
---
Changes in v5:
* Drop skb_is_gso() check for disabling GSO in mana_features_check().
* Register .ndo_features_check conditionally to avoid unnecessary call.
Changes in v4:
* Fix warning during build reported by kernel test robot
---
Aditya Garg (2):
net: mana: Handle SKB if TX SGEs exceed hardware limit
net: mana: Drop TX skb on post_work_request failure and unmap
resources
.../net/ethernet/microsoft/mana/gdma_main.c | 6 +--
drivers/net/ethernet/microsoft/mana/mana_en.c | 48 ++++++++++++++++---
.../ethernet/microsoft/mana/mana_ethtool.c | 2 +
include/net/mana/gdma.h | 8 +++-
include/net/mana/mana.h | 2 +
5 files changed, 54 insertions(+), 12 deletions(-)
--
2.43.0
^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit
  2025-11-14 21:16 [PATCH net-next v5 0/2] net: mana: Enforce TX SGE limit and fix error cleanup Aditya Garg
@ 2025-11-14 21:16 ` Aditya Garg
  2025-11-14 21:26   ` Eric Dumazet
                     ` (2 more replies)
  2025-11-14 21:16 ` [PATCH net-next v5 2/2] net: mana: Drop TX skb on post_work_request failure and unmap resources Aditya Garg
  1 sibling, 3 replies; 8+ messages in thread
From: Aditya Garg @ 2025-11-14 21:16 UTC (permalink / raw)
  To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	kuba, pabeni, longli, kotaranov, horms, shradhagupta, ssengar,
	ernis, dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov,
	sbhatta, linux-hyperv, netdev, linux-kernel, linux-rdma,
	gargaditya
  Cc: Aditya Garg

The MANA hardware supports a maximum of 30 scatter-gather entries (SGEs)
per TX WQE. Exceeding this limit can cause TX failures.

Add an ndo_features_check() callback to validate the SKB layout before
transmission. For GSO SKBs that would exceed the hardware SGE limit, clear
NETIF_F_GSO_MASK to enforce software segmentation in the stack.

Add a fallback in mana_start_xmit() to linearize non-GSO SKBs that still
exceed the SGE limit.

Also, add an ethtool counter for linearized SKBs.

Co-developed-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
Signed-off-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
Signed-off-by: Aditya Garg <gargaditya@linux.microsoft.com>
---
Changes in v5:
* Drop skb_is_gso() check for disabling GSO in mana_features_check().
* Register .ndo_features_check conditionally to avoid unnecessary call.
Changes in v4:
* No change.
---
 drivers/net/ethernet/microsoft/mana/mana_en.c   | 41 ++++++++++++++++++-
 .../ethernet/microsoft/mana/mana_ethtool.c      |  2 +
 include/net/mana/gdma.h                         |  8 +++-
 include/net/mana/mana.h                         |  1 +
 4 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index cccd5b63cee6..d92069954fd9 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -11,6 +11,7 @@
 #include <linux/mm.h>
 #include <linux/pci.h>
 #include <linux/export.h>
+#include <linux/skbuff.h>
 
 #include <net/checksum.h>
 #include <net/ip6_checksum.h>
@@ -329,6 +330,22 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	cq = &apc->tx_qp[txq_idx].tx_cq;
 	tx_stats = &txq->stats;
 
+	BUILD_BUG_ON(MAX_TX_WQE_SGL_ENTRIES != MANA_MAX_TX_WQE_SGL_ENTRIES);
+#if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES)
+	if (skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {
+		/* GSO skb with Hardware SGE limit exceeded is not expected here
+		 * as they are handled in mana_features_check() callback
+		 */
+		if (skb_linearize(skb)) {
+			netdev_warn_once(ndev, "Failed to linearize skb with nr_frags=%d and is_gso=%d\n",
+					 skb_shinfo(skb)->nr_frags,
+					 skb_is_gso(skb));
+			goto tx_drop_count;
+		}
+		apc->eth_stats.linear_pkt_tx_cnt++;
+	}
+#endif
+
 	pkg.tx_oob.s_oob.vcq_num = cq->gdma_id;
 	pkg.tx_oob.s_oob.vsq_frame = txq->vsq_frame;
 
@@ -442,8 +459,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		}
 	}
 
-	WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);
-
 	if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
 		pkg.wqe_req.sgl = pkg.sgl_array;
 	} else {
@@ -518,6 +533,25 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	return NETDEV_TX_OK;
 }
 
+#if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES)
+static netdev_features_t mana_features_check(struct sk_buff *skb,
+					     struct net_device *ndev,
+					     netdev_features_t features)
+{
+	if (skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {
+		/* Exceeds HW SGE limit.
+		 * GSO case:
+		 *   Disable GSO so the stack will software-segment the skb
+		 *   into smaller skbs that fit the SGE budget.
+		 * Non-GSO case:
+		 *   The xmit path will attempt skb_linearize() as a fallback.
+		 */
+		features &= ~NETIF_F_GSO_MASK;
+	}
+	return features;
+}
+#endif
+
 static void mana_get_stats64(struct net_device *ndev,
 			     struct rtnl_link_stats64 *st)
 {
@@ -878,6 +912,9 @@ static const struct net_device_ops mana_devops = {
 	.ndo_open		= mana_open,
 	.ndo_stop		= mana_close,
 	.ndo_select_queue	= mana_select_queue,
+#if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES)
+	.ndo_features_check	= mana_features_check,
+#endif
 	.ndo_start_xmit		= mana_start_xmit,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_get_stats64	= mana_get_stats64,
diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
index a1afa75a9463..fa5e1a2f06a9 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
@@ -71,6 +71,8 @@ static const struct mana_stats_desc mana_eth_stats[] = {
 	{"tx_cq_err", offsetof(struct mana_ethtool_stats, tx_cqe_err)},
 	{"tx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
 					tx_cqe_unknown_type)},
+	{"linear_pkt_tx_cnt", offsetof(struct mana_ethtool_stats,
+				       linear_pkt_tx_cnt)},
 	{"rx_coalesced_err", offsetof(struct mana_ethtool_stats,
 				      rx_coalesced_err)},
 	{"rx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 637f42485dba..6dae78dc468f 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -489,6 +489,8 @@ struct gdma_wqe {
 #define MAX_TX_WQE_SIZE 512
 #define MAX_RX_WQE_SIZE 256
 
+#define MANA_MAX_TX_WQE_SGL_ENTRIES 30
+
 #define MAX_TX_WQE_SGL_ENTRIES	((GDMA_MAX_SQE_SIZE -			   \
 			sizeof(struct gdma_sge) - INLINE_OOB_SMALL_SIZE) / \
 			sizeof(struct gdma_sge))
@@ -592,6 +594,9 @@ enum {
 #define GDMA_DRV_CAP_FLAG_1_HANDLE_RECONFIG_EQE BIT(17)
 #define GDMA_DRV_CAP_FLAG_1_HW_VPORT_LINK_AWARE BIT(6)
 
+/* Driver supports linearizing the skb when num_sge exceeds hardware limit */
+#define GDMA_DRV_CAP_FLAG_1_SKB_LINEARIZE BIT(20)
+
 #define GDMA_DRV_CAP_FLAGS1 \
 	(GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
 	 GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
@@ -601,7 +606,8 @@ enum {
 	 GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT | \
 	 GDMA_DRV_CAP_FLAG_1_SELF_RESET_ON_EQE | \
 	 GDMA_DRV_CAP_FLAG_1_HANDLE_RECONFIG_EQE | \
-	 GDMA_DRV_CAP_FLAG_1_HW_VPORT_LINK_AWARE)
+	 GDMA_DRV_CAP_FLAG_1_HW_VPORT_LINK_AWARE | \
+	 GDMA_DRV_CAP_FLAG_1_SKB_LINEARIZE)
 
 #define GDMA_DRV_CAP_FLAGS2 0
 
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 8906901535f5..50a532fb30d6 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -404,6 +404,7 @@ struct mana_ethtool_stats {
 	u64 hc_tx_err_gdma;
 	u64 tx_cqe_err;
 	u64 tx_cqe_unknown_type;
+	u64 linear_pkt_tx_cnt;
 	u64 rx_coalesced_err;
 	u64 rx_cqe_unknown_type;
 };
-- 
2.43.0
* Re: [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit
  2025-11-14 21:16 ` [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit Aditya Garg
@ 2025-11-14 21:26   ` Eric Dumazet
  2025-11-16 21:36   ` Haiyang Zhang
  2025-11-18  3:46   ` Jakub Kicinski
  2 siblings, 0 replies; 8+ messages in thread
From: Eric Dumazet @ 2025-11-14 21:26 UTC (permalink / raw)
  To: Aditya Garg
  Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, kuba,
	pabeni, longli, kotaranov, horms, shradhagupta, ssengar, ernis,
	dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov, sbhatta,
	linux-hyperv, netdev, linux-kernel, linux-rdma, gargaditya

On Fri, Nov 14, 2025 at 1:19 PM Aditya Garg
<gargaditya@linux.microsoft.com> wrote:
>
> The MANA hardware supports a maximum of 30 scatter-gather entries (SGEs)
> per TX WQE. Exceeding this limit can cause TX failures.
> Add ndo_features_check() callback to validate SKB layout before
> transmission. For GSO SKBs that would exceed the hardware SGE limit, clear
> NETIF_F_GSO_MASK to enforce software segmentation in the stack.
> Add a fallback in mana_start_xmit() to linearize non-GSO SKBs that still
> exceed the SGE limit.
>
> Also, add an ethtool counter for linearized SKBs.
>
> Co-developed-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
> Signed-off-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
> Signed-off-by: Aditya Garg <gargaditya@linux.microsoft.com>

Reviewed-by: Eric Dumazet <edumazet@google.com>
* RE: [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit
  2025-11-14 21:16 ` [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit Aditya Garg
  2025-11-14 21:26   ` Eric Dumazet
@ 2025-11-16 21:36   ` Haiyang Zhang
  2025-11-18  3:46   ` Jakub Kicinski
  2 siblings, 0 replies; 8+ messages in thread
From: Haiyang Zhang @ 2025-11-16 21:36 UTC (permalink / raw)
  To: Aditya Garg, KY Srinivasan, wei.liu@kernel.org, Dexuan Cui,
	andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, Long Li, Konstantin Taranov,
	horms@kernel.org, shradhagupta@linux.microsoft.com,
	ssengar@linux.microsoft.com, ernis@linux.microsoft.com,
	dipayanroy@linux.microsoft.com, Shiraz Saleem, leon@kernel.org,
	mlevitsk@redhat.com, yury.norov@gmail.com, sbhatta@marvell.com,
	linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	Aditya Garg

> -----Original Message-----
> From: Aditya Garg <gargaditya@linux.microsoft.com>
> Sent: Friday, November 14, 2025 4:17 PM
> To: KY Srinivasan <kys@microsoft.com>; Haiyang Zhang
> <haiyangz@microsoft.com>; wei.liu@kernel.org; Dexuan Cui
> <DECUI@microsoft.com>; andrew+netdev@lunn.ch; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Long Li
> <longli@microsoft.com>; Konstantin Taranov <kotaranov@microsoft.com>;
> horms@kernel.org; shradhagupta@linux.microsoft.com;
> ssengar@linux.microsoft.com; ernis@linux.microsoft.com;
> dipayanroy@linux.microsoft.com; Shiraz Saleem
> <shirazsaleem@microsoft.com>; leon@kernel.org; mlevitsk@redhat.com;
> yury.norov@gmail.com; sbhatta@marvell.com; linux-hyperv@vger.kernel.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; linux-
> rdma@vger.kernel.org; Aditya Garg <gargaditya@microsoft.com>
> Cc: Aditya Garg <gargaditya@linux.microsoft.com>
> Subject: [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed
> hardware limit
>
> The MANA hardware supports a maximum of 30 scatter-gather entries (SGEs)
> per TX WQE. Exceeding this limit can cause TX failures.
> Add ndo_features_check() callback to validate SKB layout before
> transmission. For GSO SKBs that would exceed the hardware SGE limit, clear
> NETIF_F_GSO_MASK to enforce software segmentation in the stack.
> Add a fallback in mana_start_xmit() to linearize non-GSO SKBs that still
> exceed the SGE limit.
>
> Also, add an ethtool counter for linearized SKBs.
>
> Co-developed-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
> Signed-off-by: Dipayaan Roy <dipayanroy@linux.microsoft.com>
> Signed-off-by: Aditya Garg <gargaditya@linux.microsoft.com>

Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
* Re: [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit
  2025-11-14 21:16 ` [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit Aditya Garg
  2025-11-14 21:26   ` Eric Dumazet
  2025-11-16 21:36   ` Haiyang Zhang
@ 2025-11-18  3:46   ` Jakub Kicinski
  2025-11-18 11:08     ` Aditya Garg
  2 siblings, 1 reply; 8+ messages in thread
From: Jakub Kicinski @ 2025-11-18 3:46 UTC (permalink / raw)
  To: Aditya Garg
  Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	pabeni, longli, kotaranov, horms, shradhagupta, ssengar, ernis,
	dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov, sbhatta,
	linux-hyperv, netdev, linux-kernel, linux-rdma, gargaditya

On Fri, 14 Nov 2025 13:16:42 -0800 Aditya Garg wrote:
> The MANA hardware supports a maximum of 30 scatter-gather entries (SGEs)
> per TX WQE. Exceeding this limit can cause TX failures.
> Add ndo_features_check() callback to validate SKB layout before
> transmission. For GSO SKBs that would exceed the hardware SGE limit, clear
> NETIF_F_GSO_MASK to enforce software segmentation in the stack.
> Add a fallback in mana_start_xmit() to linearize non-GSO SKBs that still
> exceed the SGE limit.

> +	BUILD_BUG_ON(MAX_TX_WQE_SGL_ENTRIES != MANA_MAX_TX_WQE_SGL_ENTRIES);
> +#if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES)
> +	if (skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {

nit: please try to avoid the use of ifdef if you can. This helps to
avoid build breakage sneaking in as this code will be compiled out
on default config on all platforms.

Instead you should be able to simply add the static condition to the
if statement:

	if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES &&
	    skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {

and let the compiler (rather than preprocessor) eliminate this if ()
block.

> +		/* GSO skb with Hardware SGE limit exceeded is not expected here
> +		 * as they are handled in mana_features_check() callback
> +		 */
> +		if (skb_linearize(skb)) {
> +			netdev_warn_once(ndev, "Failed to linearize skb with nr_frags=%d and is_gso=%d\n",
> +					 skb_shinfo(skb)->nr_frags,
> +					 skb_is_gso(skb));
> +			goto tx_drop_count;
> +		}
> +		apc->eth_stats.linear_pkt_tx_cnt++;
> +	}
> +#endif
-- 
pw-bot: cr
* Re: [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit
  2025-11-18  3:46 ` Jakub Kicinski
@ 2025-11-18 11:08   ` Aditya Garg
  0 siblings, 0 replies; 8+ messages in thread
From: Aditya Garg @ 2025-11-18 11:08 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	pabeni, longli, kotaranov, horms, shradhagupta, ssengar, ernis,
	dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov, sbhatta,
	linux-hyperv, netdev, linux-kernel, linux-rdma, gargaditya

On 18-11-2025 09:16, Jakub Kicinski wrote:
> On Fri, 14 Nov 2025 13:16:42 -0800 Aditya Garg wrote:
>> The MANA hardware supports a maximum of 30 scatter-gather entries (SGEs)
>> per TX WQE. Exceeding this limit can cause TX failures.
>> Add ndo_features_check() callback to validate SKB layout before
>> transmission. For GSO SKBs that would exceed the hardware SGE limit, clear
>> NETIF_F_GSO_MASK to enforce software segmentation in the stack.
>> Add a fallback in mana_start_xmit() to linearize non-GSO SKBs that still
>> exceed the SGE limit.
>
>> +	BUILD_BUG_ON(MAX_TX_WQE_SGL_ENTRIES != MANA_MAX_TX_WQE_SGL_ENTRIES);
>> +#if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES)
>> +	if (skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {
>
> nit: please try to avoid the use of ifdef if you can. This helps to
> avoid build breakage sneaking in as this code will be compiled out
> on default config on all platforms.
>
> Instead you should be able to simply add the static condition to the
> if statement:
>
> 	if (MAX_SKB_FRAGS + 2 > MANA_MAX_TX_WQE_SGL_ENTRIES &&
> 	    skb_shinfo(skb)->nr_frags + 2 > MAX_TX_WQE_SGL_ENTRIES) {
>
> and let the compiler (rather than preprocessor) eliminate this if ()
> block.

Thanks for the review and explanation, Jakub. I will incorporate this
change in the next revision.

Regards,
Aditya
* [PATCH net-next v5 2/2] net: mana: Drop TX skb on post_work_request failure and unmap resources
  2025-11-14 21:16 [PATCH net-next v5 0/2] net: mana: Enforce TX SGE limit and fix error cleanup Aditya Garg
  2025-11-14 21:16 ` [PATCH net-next v5 1/2] net: mana: Handle SKB if TX SGEs exceed hardware limit Aditya Garg
@ 2025-11-14 21:16 ` Aditya Garg
  2025-11-16 21:44   ` Haiyang Zhang
  1 sibling, 1 reply; 8+ messages in thread
From: Aditya Garg @ 2025-11-14 21:16 UTC (permalink / raw)
  To: kys, haiyangz, wei.liu, decui, andrew+netdev, davem, edumazet,
	kuba, pabeni, longli, kotaranov, horms, shradhagupta, ssengar,
	ernis, dipayanroy, shirazsaleem, leon, mlevitsk, yury.norov,
	sbhatta, linux-hyperv, netdev, linux-kernel, linux-rdma,
	gargaditya
  Cc: Aditya Garg

Drop TX packets when posting the work request fails and ensure DMA
mappings are always cleaned up.

Signed-off-by: Aditya Garg <gargaditya@linux.microsoft.com>
---
Changes in v5:
* No change.
Changes in v4:
* Fix warning during build reported by kernel test robot
---
 drivers/net/ethernet/microsoft/mana/gdma_main.c | 6 +-----
 drivers/net/ethernet/microsoft/mana/mana_en.c   | 7 +++----
 include/net/mana/mana.h                         | 1 +
 3 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index effe0a2f207a..8fd70b34807a 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1300,7 +1300,6 @@ int mana_gd_post_work_request(struct gdma_queue *wq,
 			      struct gdma_posted_wqe_info *wqe_info)
 {
 	u32 client_oob_size = wqe_req->inline_oob_size;
-	struct gdma_context *gc;
 	u32 sgl_data_size;
 	u32 max_wqe_size;
 	u32 wqe_size;
@@ -1330,11 +1329,8 @@ int mana_gd_post_work_request(struct gdma_queue *wq,
 	if (wqe_size > max_wqe_size)
 		return -EINVAL;
 
-	if (wq->monitor_avl_buf && wqe_size > mana_gd_wq_avail_space(wq)) {
-		gc = wq->gdma_dev->gdma_context;
-		dev_err(gc->dev, "unsuccessful flow control!\n");
+	if (wq->monitor_avl_buf && wqe_size > mana_gd_wq_avail_space(wq))
 		return -ENOSPC;
-	}
 
 	if (wqe_info)
 		wqe_info->wqe_size_in_bu = wqe_size / GDMA_WQE_BU_SIZE;
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index d92069954fd9..d656c0882343 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -493,9 +493,9 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 
 	if (err) {
 		(void)skb_dequeue_tail(&txq->pending_skbs);
+		mana_unmap_skb(skb, apc);
 		netdev_warn(ndev, "Failed to post TX OOB: %d\n", err);
-		err = NETDEV_TX_BUSY;
-		goto tx_busy;
+		goto free_sgl_ptr;
 	}
 
 	err = NETDEV_TX_OK;
@@ -515,7 +515,6 @@ netdev_tx_t mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		tx_stats->bytes += len + ((num_gso_seg - 1) * gso_hs);
 	u64_stats_update_end(&tx_stats->syncp);
 
-tx_busy:
 	if (netif_tx_queue_stopped(net_txq) && mana_can_tx(gdma_sq)) {
 		netif_tx_wake_queue(net_txq);
 		apc->eth_stats.wake_queue++;
@@ -1683,7 +1682,7 @@ static int mana_move_wq_tail(struct gdma_queue *wq, u32 num_units)
 	return 0;
 }
 
-static void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc)
+void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc)
 {
 	struct mana_skb_head *ash = (struct mana_skb_head *)skb->head;
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 50a532fb30d6..d05457d3e1ab 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -585,6 +585,7 @@ int mana_set_bw_clamp(struct mana_port_context *apc, u32 speed,
 void mana_query_phy_stats(struct mana_port_context *apc);
 int mana_pre_alloc_rxbufs(struct mana_port_context *apc, int mtu, int num_queues);
 void mana_pre_dealloc_rxbufs(struct mana_port_context *apc);
+void mana_unmap_skb(struct sk_buff *skb, struct mana_port_context *apc);
 
 extern const struct ethtool_ops mana_ethtool_ops;
 extern struct dentry *mana_debugfs_root;
-- 
2.43.0
* RE: [PATCH net-next v5 2/2] net: mana: Drop TX skb on post_work_request failure and unmap resources
  2025-11-14 21:16 ` [PATCH net-next v5 2/2] net: mana: Drop TX skb on post_work_request failure and unmap resources Aditya Garg
@ 2025-11-16 21:44   ` Haiyang Zhang
  0 siblings, 0 replies; 8+ messages in thread
From: Haiyang Zhang @ 2025-11-16 21:44 UTC (permalink / raw)
  To: Aditya Garg, KY Srinivasan, wei.liu@kernel.org, Dexuan Cui,
	andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, Long Li, Konstantin Taranov,
	horms@kernel.org, shradhagupta@linux.microsoft.com,
	ssengar@linux.microsoft.com, ernis@linux.microsoft.com,
	dipayanroy@linux.microsoft.com, Shiraz Saleem, leon@kernel.org,
	mlevitsk@redhat.com, yury.norov@gmail.com, sbhatta@marvell.com,
	linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	Aditya Garg

> -----Original Message-----
> From: Aditya Garg <gargaditya@linux.microsoft.com>
> Sent: Friday, November 14, 2025 4:17 PM
> To: KY Srinivasan <kys@microsoft.com>; Haiyang Zhang
> <haiyangz@microsoft.com>; wei.liu@kernel.org; Dexuan Cui
> <DECUI@microsoft.com>; andrew+netdev@lunn.ch; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Long Li
> <longli@microsoft.com>; Konstantin Taranov <kotaranov@microsoft.com>;
> horms@kernel.org; shradhagupta@linux.microsoft.com;
> ssengar@linux.microsoft.com; ernis@linux.microsoft.com;
> dipayanroy@linux.microsoft.com; Shiraz Saleem
> <shirazsaleem@microsoft.com>; leon@kernel.org; mlevitsk@redhat.com;
> yury.norov@gmail.com; sbhatta@marvell.com; linux-hyperv@vger.kernel.org;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; linux-
> rdma@vger.kernel.org; Aditya Garg <gargaditya@microsoft.com>
> Cc: Aditya Garg <gargaditya@linux.microsoft.com>
> Subject: [PATCH net-next v5 2/2] net: mana: Drop TX skb on
> post_work_request failure and unmap resources
>
> Drop TX packets when posting the work request fails and ensure DMA
> mappings are always cleaned up.
>
> Signed-off-by: Aditya Garg <gargaditya@linux.microsoft.com>

Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>