public inbox for netdev@vger.kernel.org
From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Simon Horman <horms@kernel.org>
Cc: michael.chan@broadcom.com, pavan.chebbi@broadcom.com,
	linux-kernel@vger.kernel.org, Joe Damato <joe@dama.to>
Subject: [RFC net-next 02/10] net: tso: Add tso_dma_map helpers
Date: Tue, 10 Mar 2026 14:21:50 -0700	[thread overview]
Message-ID: <20260310212209.2263939-3-joe@dama.to> (raw)
In-Reply-To: <20260310212209.2263939-1-joe@dama.to>

Add helpers to initialize, iterate, and clean up a tso_dma_map:

tso_dma_map_init(): DMA-maps the linear payload region and all frags
up front into the tso_dma_map struct. Returns 0 on success; on failure
it unmaps any partial mappings and returns -ENOMEM.

tso_dma_map_cleanup(): unmaps all DMA regions. Used on error paths.

tso_dma_map_count(): counts how many descriptors the next N bytes of
payload will need, without advancing the iterator.

tso_dma_map_next(): yields the next (dma_addr, chunk_len) pair.
Indicates when a chunk starts a new DMA mapping so the driver can set
dma_unmap_len on that BD for completion-time unmapping.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Joe Damato <joe@dama.to>
---
 include/net/tso.h |   8 +++
 net/core/tso.c    | 165 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 173 insertions(+)

diff --git a/include/net/tso.h b/include/net/tso.h
index 9a508e60ee19..dcb93c7fb917 100644
--- a/include/net/tso.h
+++ b/include/net/tso.h
@@ -65,4 +65,12 @@ struct tso_dma_map {
 	} frags[MAX_SKB_FRAGS];
 };
 
+int tso_dma_map_init(struct tso_dma_map *map, struct device *dev,
+		     const struct sk_buff *skb, unsigned int hdr_len);
+void tso_dma_map_cleanup(struct tso_dma_map *map);
+unsigned int tso_dma_map_count(const struct tso_dma_map *map, unsigned int len);
+bool tso_dma_map_next(struct tso_dma_map *map, dma_addr_t *addr,
+		      unsigned int *chunk_len, unsigned int *mapping_len,
+		      unsigned int seg_remaining);
+
 #endif	/* _TSO_H */
diff --git a/net/core/tso.c b/net/core/tso.c
index 6df997b9076e..48348ad94501 100644
--- a/net/core/tso.c
+++ b/net/core/tso.c
@@ -3,6 +3,7 @@
 #include <linux/if_vlan.h>
 #include <net/ip.h>
 #include <net/tso.h>
+#include <linux/dma-mapping.h>
 #include <linux/unaligned.h>
 
 void tso_build_hdr(const struct sk_buff *skb, char *hdr, struct tso_t *tso,
@@ -87,3 +88,167 @@ int tso_start(struct sk_buff *skb, struct tso_t *tso)
 	return hdr_len;
 }
 EXPORT_SYMBOL(tso_start);
+
+/**
+ * tso_dma_map_init - DMA-map GSO payload regions
+ * @map: map struct to initialize
+ * @dev: device for DMA mapping
+ * @skb: the GSO skb
+ * @hdr_len: per-segment header length in bytes
+ *
+ * DMA-maps the linear payload (after headers) and all frags.
+ * Positions the iterator at byte 0 of the payload.
+ *
+ * Return: 0 on success, -ENOMEM on DMA mapping failure (partial mappings
+ * are cleaned up internally).
+ */
+int tso_dma_map_init(struct tso_dma_map *map, struct device *dev,
+		     const struct sk_buff *skb, unsigned int hdr_len)
+{
+	unsigned int linear_len = skb_headlen(skb) - hdr_len;
+	unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+	int i;
+
+	map->dev = dev;
+	map->skb = skb;
+	map->hdr_len = hdr_len;
+	map->frag_idx = -1;
+	map->offset = 0;
+	map->linear_len = 0;
+	map->nr_frags = 0;
+
+	if (linear_len > 0) {
+		map->linear_dma = dma_map_single(dev, skb->data + hdr_len,
+						 linear_len, DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, map->linear_dma))
+			return -ENOMEM;
+		map->linear_len = linear_len;
+	}
+
+	for (i = 0; i < nr_frags; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		map->frags[i].len = skb_frag_size(frag);
+		map->frags[i].dma = skb_frag_dma_map(dev, frag, 0,
+						     map->frags[i].len,
+						     DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, map->frags[i].dma)) {
+			tso_dma_map_cleanup(map);
+			return -ENOMEM;
+		}
+		map->nr_frags = i + 1;
+	}
+
+	if (linear_len == 0 && nr_frags > 0)
+		map->frag_idx = 0;
+
+	return 0;
+}
+EXPORT_SYMBOL(tso_dma_map_init);
+
+/**
+ * tso_dma_map_cleanup - unmap all DMA regions in a tso_dma_map
+ * @map: the map to clean up
+ *
+ * Unmaps the linear payload and all mapped frags. Used on error paths;
+ * on success, the driver's TX completion handler performs the unmapping.
+ */
+void tso_dma_map_cleanup(struct tso_dma_map *map)
+{
+	int i;
+
+	if (map->linear_len)
+		dma_unmap_single(map->dev, map->linear_dma, map->linear_len,
+				 DMA_TO_DEVICE);
+
+	for (i = 0; i < map->nr_frags; i++)
+		dma_unmap_page(map->dev, map->frags[i].dma, map->frags[i].len,
+			       DMA_TO_DEVICE);
+
+	map->linear_len = 0;
+	map->nr_frags = 0;
+}
+EXPORT_SYMBOL(tso_dma_map_cleanup);
+
+/**
+ * tso_dma_map_count - count descriptors for a payload range
+ * @map: the payload map
+ * @len: number of payload bytes in this segment
+ *
+ * Counts how many contiguous DMA region chunks the next @len bytes
+ * will span, without advancing the iterator. Uses frag sizes from
+ * the current position.
+ *
+ * Return: the number of descriptors needed for @len bytes of payload.
+ */
+unsigned int tso_dma_map_count(const struct tso_dma_map *map, unsigned int len)
+{
+	unsigned int offset = map->offset;
+	int idx = map->frag_idx;
+	unsigned int count = 0;
+
+	while (len > 0) {
+		unsigned int region_len, chunk;
+
+		if (idx == -1)
+			region_len = map->linear_len;
+		else
+			region_len = map->frags[idx].len;
+
+		chunk = min(len, region_len - offset);
+		len -= chunk;
+		count++;
+		offset = 0;
+		idx++;
+	}
+
+	return count;
+}
+EXPORT_SYMBOL(tso_dma_map_count);
+
+/**
+ * tso_dma_map_next - yield the next DMA address range
+ * @map: the payload map
+ * @addr: output DMA address
+ * @chunk_len: output chunk length
+ * @mapping_len: output full mapping length when this is the first BD of
+ *               a DMA mapping (driver should set dma_unmap_len to this),
+ *               or 0 when continuing a previous mapping
+ * @seg_remaining: bytes left in current segment
+ *
+ * Yields the next (dma_addr, chunk_len) pair and advances the iterator.
+ *
+ * Return: true if a chunk was yielded, false when @seg_remaining is 0.
+ */
+bool tso_dma_map_next(struct tso_dma_map *map, dma_addr_t *addr,
+		      unsigned int *chunk_len, unsigned int *mapping_len,
+		      unsigned int seg_remaining)
+{
+	unsigned int region_len, chunk;
+
+	if (!seg_remaining)
+		return false;
+
+	if (map->frag_idx == -1) {
+		region_len = map->linear_len;
+		chunk = min(seg_remaining, region_len - map->offset);
+		*addr = map->linear_dma + map->offset;
+		*mapping_len = (map->offset == 0) ? region_len : 0;
+	} else {
+		region_len = map->frags[map->frag_idx].len;
+		chunk = min(seg_remaining, region_len - map->offset);
+		*addr = map->frags[map->frag_idx].dma + map->offset;
+		*mapping_len = (map->offset == 0) ? region_len : 0;
+	}
+
+	*chunk_len = chunk;
+	map->offset += chunk;
+
+	if (map->offset >= region_len) {
+		map->frag_idx++;
+		map->offset = 0;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(tso_dma_map_next);
-- 
2.52.0



Thread overview: 12+ messages
2026-03-10 21:21 [RFC net-next 00/10] Add TSO map-once DMA helpers and bnxt SW USO support Joe Damato
2026-03-10 21:21 ` [RFC net-next 01/10] net: tso: Introduce tso_dma_map Joe Damato
2026-03-10 21:21 ` Joe Damato [this message]
2026-03-10 21:21 ` [RFC net-next 03/10] net: bnxt: Export bnxt_xmit_get_cfa_action Joe Damato
2026-03-10 21:21 ` [RFC net-next 04/10] net: bnxt: Add a helper for tx_bd_ext Joe Damato
2026-03-10 21:21 ` [RFC net-next 05/10] net: bnxt: Use dma_unmap_len for TX completion unmapping Joe Damato
2026-03-10 21:21 ` [RFC net-next 06/10] net: bnxt: Add TX inline buffer infrastructure Joe Damato
2026-03-10 21:21 ` [RFC net-next 07/10] net: bnxt: Add boilerplate GSO code Joe Damato
2026-03-10 21:21 ` [RFC net-next 08/10] net: bnxt: Implement software USO Joe Damato
2026-03-10 21:21 ` [RFC net-next 09/10] net: bnxt: Add SW GSO completion and teardown support Joe Damato
2026-03-10 21:21 ` [RFC net-next 10/10] net: bnxt: Dispatch to SW USO Joe Damato
2026-03-10 22:04 ` [RFC net-next 00/10] Add TSO map-once DMA helpers and bnxt SW USO support Joe Damato
