From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, Michael Chan <michael.chan@broadcom.com>,
Pavan Chebbi <pavan.chebbi@broadcom.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: horms@kernel.org, linux-kernel@vger.kernel.org, leon@kernel.org,
Joe Damato <joe@dama.to>
Subject: [net-next v5 05/12] net: bnxt: Use dma_unmap_len for TX completion unmapping
Date: Mon, 23 Mar 2026 11:38:30 -0700
Message-ID: <20260323183844.3146982-6-joe@dama.to>
In-Reply-To: <20260323183844.3146982-1-joe@dama.to>

Store the DMA mapping length in each TX buffer descriptor via
dma_unmap_len_set() at submit time, and use dma_unmap_len() at
completion time.

This is a no-op for normal packets, but prepares for software USO,
where header BDs set dma_unmap_len to 0 because the header buffer
is unmapped collectively rather than per-segment.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Joe Damato <joe@dama.to>
---
v4:
 - Added Pavan's Reviewed-by tag. No functional changes.

rfcv2:
 - Use some local variables to shorten long lines. No functional change
   from rfcv1.

drivers/net/ethernet/broadcom/bnxt/bnxt.c | 63 ++++++++++++++---------
1 file changed, 40 insertions(+), 23 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 4d4e7643f7dd..fe15b32b12e7 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -656,6 +656,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
goto tx_free;
dma_unmap_addr_set(tx_buf, mapping, mapping);
+ dma_unmap_len_set(tx_buf, len, len);
flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
TX_BD_CNT(last_frag + 2);
@@ -720,6 +721,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
netmem_dma_unmap_addr_set(skb_frag_netmem(frag), tx_buf,
mapping, mapping);
+ dma_unmap_len_set(tx_buf, len, len);
txbd->tx_bd_haddr = cpu_to_le64(mapping);
@@ -809,7 +811,8 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
u16 hw_cons = txr->tx_hw_cons;
unsigned int tx_bytes = 0;
u16 cons = txr->tx_cons;
- skb_frag_t *frag;
+ unsigned int dma_len;
+ dma_addr_t dma_addr;
int tx_pkts = 0;
bool rc = false;
@@ -844,19 +847,27 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
goto next_tx_int;
}
- dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
- skb_headlen(skb), DMA_TO_DEVICE);
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_addr = dma_unmap_addr(tx_buf, mapping);
+ dma_len = dma_unmap_len(tx_buf, len);
+
+ dma_unmap_single(&pdev->dev, dma_addr, dma_len,
+ DMA_TO_DEVICE);
+ }
+
last = tx_buf->nr_frags;
for (j = 0; j < last; j++) {
- frag = &skb_shinfo(skb)->frags[j];
cons = NEXT_TX(cons);
tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
- netmem_dma_unmap_page_attrs(&pdev->dev,
- dma_unmap_addr(tx_buf,
- mapping),
- skb_frag_size(frag),
- DMA_TO_DEVICE, 0);
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_addr = dma_unmap_addr(tx_buf, mapping);
+ dma_len = dma_unmap_len(tx_buf, len);
+
+ netmem_dma_unmap_page_attrs(&pdev->dev,
+ dma_addr, dma_len,
+ DMA_TO_DEVICE, 0);
+ }
}
if (unlikely(is_ts_pkt)) {
if (BNXT_CHIP_P5(bp)) {
@@ -3402,6 +3413,8 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
{
int i, max_idx;
struct pci_dev *pdev = bp->pdev;
+ unsigned int dma_len;
+ dma_addr_t dma_addr;
max_idx = bp->tx_nr_pages * TX_DESC_CNT;
@@ -3412,10 +3425,10 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
if (idx < bp->tx_nr_rings_xdp &&
tx_buf->action == XDP_REDIRECT) {
- dma_unmap_single(&pdev->dev,
- dma_unmap_addr(tx_buf, mapping),
- dma_unmap_len(tx_buf, len),
- DMA_TO_DEVICE);
+ dma_addr = dma_unmap_addr(tx_buf, mapping);
+ dma_len = dma_unmap_len(tx_buf, len);
+
+ dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
xdp_return_frame(tx_buf->xdpf);
tx_buf->action = 0;
tx_buf->xdpf = NULL;
@@ -3437,23 +3450,27 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
continue;
}
- dma_unmap_single(&pdev->dev,
- dma_unmap_addr(tx_buf, mapping),
- skb_headlen(skb),
- DMA_TO_DEVICE);
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_addr = dma_unmap_addr(tx_buf, mapping);
+ dma_len = dma_unmap_len(tx_buf, len);
+
+ dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
+ }
last = tx_buf->nr_frags;
i += 2;
for (j = 0; j < last; j++, i++) {
int ring_idx = i & bp->tx_ring_mask;
- skb_frag_t *frag = &skb_shinfo(skb)->frags[j];
tx_buf = &txr->tx_buf_ring[ring_idx];
- netmem_dma_unmap_page_attrs(&pdev->dev,
- dma_unmap_addr(tx_buf,
- mapping),
- skb_frag_size(frag),
- DMA_TO_DEVICE, 0);
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_addr = dma_unmap_addr(tx_buf, mapping);
+ dma_len = dma_unmap_len(tx_buf, len);
+
+ netmem_dma_unmap_page_attrs(&pdev->dev,
+ dma_addr, dma_len,
+ DMA_TO_DEVICE, 0);
+ }
}
dev_kfree_skb(skb);
}
--
2.52.0
Thread overview: 16+ messages
2026-03-23 18:38 [net-next v5 00/12] Add TSO map-once DMA helpers and bnxt SW USO support Joe Damato
2026-03-23 18:38 ` [net-next v5 01/12] net: tso: Introduce tso_dma_map Joe Damato
2026-03-23 18:38 ` [net-next v5 02/12] net: tso: Add tso_dma_map helpers Joe Damato
2026-03-23 18:38 ` [net-next v5 03/12] net: bnxt: Export bnxt_xmit_get_cfa_action Joe Damato
2026-03-23 18:38 ` [net-next v5 04/12] net: bnxt: Add a helper for tx_bd_ext Joe Damato
2026-03-23 18:38 ` Joe Damato [this message]
2026-03-23 18:38 ` [net-next v5 06/12] net: bnxt: Add TX inline buffer infrastructure Joe Damato
2026-03-23 18:38 ` [net-next v5 07/12] net: bnxt: Add boilerplate GSO code Joe Damato
2026-03-23 18:38 ` [net-next v5 08/12] net: bnxt: Implement software USO Joe Damato
2026-03-23 18:38 ` [net-next v5 09/12] net: bnxt: Add SW GSO completion and teardown support Joe Damato
2026-03-26 12:39 ` Paolo Abeni
2026-03-26 17:02 ` Joe Damato
2026-03-26 17:20 ` Paolo Abeni
2026-03-23 18:38 ` [net-next v5 10/12] net: bnxt: Dispatch to SW USO Joe Damato
2026-03-23 18:38 ` [net-next v5 11/12] net: netdevsim: Add support for " Joe Damato
2026-03-23 18:38 ` [net-next v5 12/12] selftests: drv-net: Add USO test Joe Damato