netdev.vger.kernel.org archive mirror
* [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata
@ 2025-10-19 12:45 Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head Jakub Sitnicki
                   ` (14 more replies)
  0 siblings, 15 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

This patch set continues our work [1] to allow BPF programs and user-space
applications to attach multiple bytes of metadata to packets via the
XDP/skb metadata area.

The focus of this patch set is to ensure that skb metadata remains intact
when packets pass through a chain of TC BPF programs that call helpers
which operate on the skb head.

Currently, several helpers that either adjust the skb->data pointer or
reallocate skb->head do not preserve metadata at its expected location,
that is, immediately in front of the MAC header. These are:

- bpf_skb_adjust_room
- bpf_skb_change_head
- bpf_skb_change_proto
- bpf_skb_change_tail
- bpf_skb_vlan_pop
- bpf_skb_vlan_push

In TC BPF context, metadata must be moved whenever skb->data changes to
keep the skb->data_meta pointer valid. I don't see any way around it.
Creative ideas on how to avoid that would be very welcome.

We can patch the helpers in at least two different ways:

1. Integrate metadata move into header move

   Replace the existing memmove, which follows skb_push/pull, with a helper
   that moves both headers and metadata in a single call. This avoids an
   extra memmove but reduces transparency.

        skb_pull(skb, len);
-       memmove(skb->data, skb->data - len, n);
+       skb_postpull_data_move(skb, len, n);
        skb->mac_header += len;

        skb_push(skb, len);
-       memmove(skb->data, skb->data + len, n);
+       skb_postpush_data_move(skb, len, n);
        skb->mac_header -= len;

2. Move metadata separately

   Add a dedicated metadata move after the header move. This is more
   explicit but costs an additional memmove.

        skb_pull(skb, len);
        memmove(skb->data, skb->data - len, n);
+       skb_metadata_postpull_move(skb, len);
        skb->mac_header += len;

        skb_push(skb, len);
+       skb_metadata_postpush_move(skb, len);
        memmove(skb->data, skb->data + len, n);
        skb->mac_header -= len;

This patch set implements option (1), expecting that "you can have just one
memmove" will be the most obvious feedback, while readability is a somewhat
subjective matter of taste, which I don't claim to have ;-)

The structure of the patch set is as follows:

- patches 1-3 prepare ground for safe-proofing the BPF helpers
- patches 4-8 modify the BPF helpers to preserve skb metadata
- patches 9-10 prepare ground for verifying metadata after BPF helper calls
- patches 11-15 adapt and expand tests to cover the changes made

Thanks,
-jkbs

[1] https://lore.kernel.org/all/20250814-skb-metadata-thru-dynptr-v7-0-8a39e636e0fb@cloudflare.com/

---
Changes in v2:
- Tweak WARN_ON_ONCE check in skb_data_move() (patch 2)
- Convert all tests to verify skb metadata in BPF (patches 9-10)
- Add test coverage for modified BPF helpers (patches 12-15)
- Link to RFCv1: https://lore.kernel.org/r/20250929-skb-meta-rx-path-v1-0-de700a7ab1cb@cloudflare.com

---
Jakub Sitnicki (15):
      net: Preserve metadata on pskb_expand_head
      net: Helper to move packet data and metadata after skb_push/pull
      vlan: Make vlan_remove_tag return nothing
      bpf: Make bpf_skb_vlan_pop helper metadata-safe
      bpf: Make bpf_skb_vlan_push helper metadata-safe
      bpf: Make bpf_skb_adjust_room metadata-safe
      bpf: Make bpf_skb_change_proto helper metadata-safe
      bpf: Make bpf_skb_change_head helper metadata-safe
      selftests/bpf: Verify skb metadata in BPF instead of userspace
      selftests/bpf: Dump skb metadata on verification failure
      selftests/bpf: Expect unclone to preserve skb metadata
      selftests/bpf: Cover skb metadata access after vlan push/pop helper
      selftests/bpf: Cover skb metadata access after bpf_skb_adjust_room
      selftests/bpf: Cover skb metadata access after change_head/tail helper
      selftests/bpf: Cover skb metadata access after bpf_skb_change_proto

 include/linux/if_vlan.h                            |  13 +-
 include/linux/skbuff.h                             |  74 +++++
 net/core/filter.c                                  |  16 +-
 net/core/skbuff.c                                  |   2 -
 .../bpf/prog_tests/xdp_context_test_run.c          | 127 +++++---
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 358 +++++++++++++++------
 6 files changed, 434 insertions(+), 156 deletions(-)



* [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-24  0:51   ` Jakub Kicinski
  2025-10-19 12:45 ` [PATCH bpf-next v2 02/15] net: Helper to move packet data and metadata after skb_push/pull Jakub Sitnicki
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

pskb_expand_head() copies headroom, including skb metadata, into the newly
allocated head, but then clears the metadata. As a result, metadata is lost
when BPF helpers trigger an skb head reallocation.

Let the skb metadata remain in the newly created copy of head.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 net/core/skbuff.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6be01454f262..6e45a40e5966 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2289,8 +2289,6 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
 	skb->nohdr    = 0;
 	atomic_set(&skb_shinfo(skb)->dataref, 1);
 
-	skb_metadata_clear(skb);
-
 	/* It is not generally safe to change skb->truesize.
 	 * For the moment, we really care of rx path, or
 	 * when skb is orphaned (not attached to a socket).

-- 
2.43.0



* [PATCH bpf-next v2 02/15] net: Helper to move packet data and metadata after skb_push/pull
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 03/15] vlan: Make vlan_remove_tag return nothing Jakub Sitnicki
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Lay groundwork for fixing BPF helpers available to TC(X) programs.

When skb_push() or skb_pull() is called in a TC(X) ingress BPF program, the
skb metadata must be kept in front of the MAC header. Otherwise, BPF
programs using the __sk_buff->data_meta pseudo-pointer lose access to it.

Introduce a helper that moves both metadata and a specified number of
packet data bytes together, suitable as a drop-in replacement for
memmove().

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 include/linux/skbuff.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index fb3fec9affaa..1a0c9fbbbb92 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4561,6 +4561,80 @@ static inline void skb_metadata_clear(struct sk_buff *skb)
 	skb_metadata_set(skb, 0);
 }
 
+/**
+ * skb_data_move - Move packet data and metadata after skb_push() or skb_pull().
+ * @skb: packet to operate on
+ * @len: number of bytes pushed or pulled from &sk_buff->data
+ * @n: number of bytes to memmove() from pre-push/pull &sk_buff->data
+ *
+ * Moves both packet data (@n bytes) and metadata. Assumes metadata is located
+ * immediately before &sk_buff->data prior to the push/pull, and that sufficient
+ * headroom exists to hold it after an skb_push(). Otherwise, metadata is
+ * cleared and a one-time warning is issued.
+ *
+ * Use skb_postpull_data_move() or skb_postpush_data_move() instead of calling
+ * this helper directly.
+ */
+static inline void skb_data_move(struct sk_buff *skb, const int len,
+				 const unsigned int n)
+{
+	const u8 meta_len = skb_metadata_len(skb);
+	u8 *meta, *meta_end;
+
+	if (!len || (!n && !meta_len))
+		return;
+
+	if (!meta_len)
+		goto no_metadata;
+
+	meta_end = skb_metadata_end(skb);
+	meta = meta_end - meta_len;
+
+	if (WARN_ON_ONCE(meta_end + len != skb->data ||
+			 meta_len > skb_headroom(skb))) {
+		skb_metadata_clear(skb);
+		goto no_metadata;
+	}
+
+	memmove(meta + len, meta, meta_len + n);
+	return;
+
+no_metadata:
+	memmove(skb->data, skb->data - len, n);
+}
+
+/**
+ * skb_postpull_data_move - Move packet data and metadata after skb_pull().
+ * @skb: packet to operate on
+ * @len: number of bytes pulled from &sk_buff->data
+ * @n: number of bytes to memmove() from pre-pull &sk_buff->data
+ *
+ * See skb_data_move() for details.
+ */
+static inline void skb_postpull_data_move(struct sk_buff *skb,
+					  const unsigned int len,
+					  const unsigned int n)
+{
+	DEBUG_NET_WARN_ON_ONCE(len > INT_MAX);
+	skb_data_move(skb, len, n);
+}
+
+/**
+ * skb_postpush_data_move - Move packet data and metadata after skb_push().
+ * @skb: packet to operate on
+ * @len: number of bytes pushed onto &sk_buff->data
+ * @n: number of bytes to memmove() from pre-push &sk_buff->data
+ *
+ * See skb_data_move() for details.
+ */
+static inline void skb_postpush_data_move(struct sk_buff *skb,
+					  const unsigned int len,
+					  const unsigned int n)
+{
+	DEBUG_NET_WARN_ON_ONCE(len > INT_MAX);
+	skb_data_move(skb, -len, n);
+}
+
 struct sk_buff *skb_clone_sk(struct sk_buff *skb);
 
 #ifdef CONFIG_NETWORK_PHY_TIMESTAMPING

-- 
2.43.0



* [PATCH bpf-next v2 03/15] vlan: Make vlan_remove_tag return nothing
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 02/15] net: Helper to move packet data and metadata after skb_push/pull Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 04/15] bpf: Make bpf_skb_vlan_pop helper metadata-safe Jakub Sitnicki
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

All callers ignore the return value.

Prepare to reorder memmove() after skb_pull(), which is a common pattern.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 include/linux/if_vlan.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
index 15e01935d3fa..afa5cc61a0fa 100644
--- a/include/linux/if_vlan.h
+++ b/include/linux/if_vlan.h
@@ -731,10 +731,8 @@ static inline void vlan_set_encap_proto(struct sk_buff *skb,
  *
  * Expects the skb to contain a VLAN tag in the payload, and to have skb->data
  * pointing at the MAC header.
- *
- * Returns: a new pointer to skb->data, or NULL on failure to pull.
  */
-static inline void *vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci)
+static inline void vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci)
 {
 	struct vlan_hdr *vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
 
@@ -742,7 +740,7 @@ static inline void *vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci)
 
 	memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN);
 	vlan_set_encap_proto(skb, vhdr);
-	return __skb_pull(skb, VLAN_HLEN);
+	__skb_pull(skb, VLAN_HLEN);
 }
 
 /**

-- 
2.43.0



* [PATCH bpf-next v2 04/15] bpf: Make bpf_skb_vlan_pop helper metadata-safe
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (2 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 03/15] vlan: Make vlan_remove_tag return nothing Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 05/15] bpf: Make bpf_skb_vlan_push " Jakub Sitnicki
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Use the metadata-aware helper to move packet bytes after skb_pull(),
ensuring metadata remains valid after calling the BPF helper.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 include/linux/if_vlan.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
index afa5cc61a0fa..4ecc2509b0d4 100644
--- a/include/linux/if_vlan.h
+++ b/include/linux/if_vlan.h
@@ -738,9 +738,9 @@ static inline void vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci)
 
 	*vlan_tci = ntohs(vhdr->h_vlan_TCI);
 
-	memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN);
 	vlan_set_encap_proto(skb, vhdr);
 	__skb_pull(skb, VLAN_HLEN);
+	skb_postpull_data_move(skb, VLAN_HLEN, 2 * ETH_ALEN);
 }
 
 /**

-- 
2.43.0



* [PATCH bpf-next v2 05/15] bpf: Make bpf_skb_vlan_push helper metadata-safe
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (3 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 04/15] bpf: Make bpf_skb_vlan_pop helper metadata-safe Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 06/15] bpf: Make bpf_skb_adjust_room metadata-safe Jakub Sitnicki
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Use the metadata-aware helper to move packet bytes after skb_push(),
ensuring metadata remains valid after calling the BPF helper.

Also, take care to reserve sufficient headroom for metadata to fit.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 include/linux/if_vlan.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
index 4ecc2509b0d4..f7f34eb15e06 100644
--- a/include/linux/if_vlan.h
+++ b/include/linux/if_vlan.h
@@ -355,16 +355,17 @@ static inline int __vlan_insert_inner_tag(struct sk_buff *skb,
 					  __be16 vlan_proto, u16 vlan_tci,
 					  unsigned int mac_len)
 {
+	const u8 meta_len = mac_len > ETH_TLEN ? skb_metadata_len(skb) : 0;
 	struct vlan_ethhdr *veth;
 
-	if (skb_cow_head(skb, VLAN_HLEN) < 0)
+	if (skb_cow_head(skb, meta_len + VLAN_HLEN) < 0)
 		return -ENOMEM;
 
 	skb_push(skb, VLAN_HLEN);
 
 	/* Move the mac header sans proto to the beginning of the new header. */
 	if (likely(mac_len > ETH_TLEN))
-		memmove(skb->data, skb->data + VLAN_HLEN, mac_len - ETH_TLEN);
+		skb_postpush_data_move(skb, VLAN_HLEN, mac_len - ETH_TLEN);
 	if (skb_mac_header_was_set(skb))
 		skb->mac_header -= VLAN_HLEN;
 

-- 
2.43.0



* [PATCH bpf-next v2 06/15] bpf: Make bpf_skb_adjust_room metadata-safe
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (4 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 05/15] bpf: Make bpf_skb_vlan_push " Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 07/15] bpf: Make bpf_skb_change_proto helper metadata-safe Jakub Sitnicki
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

bpf_skb_adjust_room() may push or pull bytes from skb->data. In both cases,
skb metadata must be moved accordingly to stay accessible.

Replace the existing memmove() calls, which only move payload, with a helper
that also handles metadata. Reserve enough headroom for metadata to fit after
skb_push().

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 net/core/filter.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 76628df1fc82..5e1a52694423 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3253,11 +3253,11 @@ static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto)
 
 static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
 {
-	/* Caller already did skb_cow() with len as headroom,
+	/* Caller already did skb_cow() with meta_len+len as headroom,
 	 * so no need to do it here.
 	 */
 	skb_push(skb, len);
-	memmove(skb->data, skb->data + len, off);
+	skb_postpush_data_move(skb, len, off);
 	memset(skb->data + off, 0, len);
 
 	/* No skb_postpush_rcsum(skb, skb->data + off, len)
@@ -3281,7 +3281,7 @@ static int bpf_skb_generic_pop(struct sk_buff *skb, u32 off, u32 len)
 	old_data = skb->data;
 	__skb_pull(skb, len);
 	skb_postpull_rcsum(skb, old_data + off, len);
-	memmove(skb->data, old_data, off);
+	skb_postpull_data_move(skb, len, off);
 
 	return 0;
 }
@@ -3489,6 +3489,7 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
 	u8 inner_mac_len = flags >> BPF_ADJ_ROOM_ENCAP_L2_SHIFT;
 	bool encap = flags & BPF_F_ADJ_ROOM_ENCAP_L3_MASK;
 	u16 mac_len = 0, inner_net = 0, inner_trans = 0;
+	const u8 meta_len = skb_metadata_len(skb);
 	unsigned int gso_type = SKB_GSO_DODGY;
 	int ret;
 
@@ -3499,7 +3500,7 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 off, u32 len_diff,
 			return -ENOTSUPP;
 	}
 
-	ret = skb_cow_head(skb, len_diff);
+	ret = skb_cow_head(skb, meta_len + len_diff);
 	if (unlikely(ret < 0))
 		return ret;
 

-- 
2.43.0



* [PATCH bpf-next v2 07/15] bpf: Make bpf_skb_change_proto helper metadata-safe
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (5 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 06/15] bpf: Make bpf_skb_adjust_room metadata-safe Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 08/15] bpf: Make bpf_skb_change_head " Jakub Sitnicki
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

bpf_skb_change_proto() reuses the same headroom operations as
bpf_skb_adjust_room(), which have already been updated to handle metadata
safely.

The remaining step is to ensure that there is sufficient headroom to
accommodate metadata on skb_push().

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 net/core/filter.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 5e1a52694423..52e496a4ff27 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3326,10 +3326,11 @@ static int bpf_skb_net_hdr_pop(struct sk_buff *skb, u32 off, u32 len)
 static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
 {
 	const u32 len_diff = sizeof(struct ipv6hdr) - sizeof(struct iphdr);
+	const u8 meta_len = skb_metadata_len(skb);
 	u32 off = skb_mac_header_len(skb);
 	int ret;
 
-	ret = skb_cow(skb, len_diff);
+	ret = skb_cow(skb, meta_len + len_diff);
 	if (unlikely(ret < 0))
 		return ret;
 

-- 
2.43.0



* [PATCH bpf-next v2 08/15] bpf: Make bpf_skb_change_head helper metadata-safe
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (6 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 07/15] bpf: Make bpf_skb_change_proto helper metadata-safe Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 09/15] selftests/bpf: Verify skb metadata in BPF instead of userspace Jakub Sitnicki
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Although bpf_skb_change_head() doesn't move packet data after skb_push(),
skb metadata still needs to be relocated. Use the dedicated helper to
handle it.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 net/core/filter.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 52e496a4ff27..5ad2af9441a3 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3875,6 +3875,7 @@ static const struct bpf_func_proto sk_skb_change_tail_proto = {
 static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
 					u64 flags)
 {
+	const u8 meta_len = skb_metadata_len(skb);
 	u32 max_len = BPF_SKB_MAX_LEN;
 	u32 new_len = skb->len + head_room;
 	int ret;
@@ -3883,7 +3884,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
 		     new_len < skb->len))
 		return -EINVAL;
 
-	ret = skb_cow(skb, head_room);
+	ret = skb_cow(skb, meta_len + head_room);
 	if (likely(!ret)) {
 		/* Idea for this helper is that we currently only
 		 * allow to expand on mac header. This means that
@@ -3895,6 +3896,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
 		 * for redirection into L2 device.
 		 */
 		__skb_push(skb, head_room);
+		skb_postpush_data_move(skb, head_room, 0);
 		memset(skb->data, 0, head_room);
 		skb_reset_mac_header(skb);
 		skb_reset_mac_len(skb);

-- 
2.43.0



* [PATCH bpf-next v2 09/15] selftests/bpf: Verify skb metadata in BPF instead of userspace
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (7 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 08/15] bpf: Make bpf_skb_change_head " Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure Jakub Sitnicki
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Move metadata verification into the BPF TC programs. Previously,
userspace read metadata from a map and verified it once at test end.

Now TC programs compare metadata directly using __builtin_memcmp() and
set a test_pass flag. This enables verification at multiple points during
test execution rather than a single final check.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          | 52 ++++---------
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 88 +++++++++++-----------
 2 files changed, 57 insertions(+), 83 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index 178292d1251a..93a1fbe6a4fd 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -171,33 +171,6 @@ static int write_test_packet(int tap_fd)
 	return 0;
 }
 
-static void assert_test_result(const struct bpf_map *result_map)
-{
-	int err;
-	__u32 map_key = 0;
-	__u8 map_value[TEST_PAYLOAD_LEN];
-
-	err = bpf_map__lookup_elem(result_map, &map_key, sizeof(map_key),
-				   &map_value, TEST_PAYLOAD_LEN, BPF_ANY);
-	if (!ASSERT_OK(err, "lookup test_result"))
-		return;
-
-	ASSERT_MEMEQ(&map_value, &test_payload, TEST_PAYLOAD_LEN,
-		     "test_result map contains test payload");
-}
-
-static bool clear_test_result(struct bpf_map *result_map)
-{
-	const __u8 v[sizeof(test_payload)] = {};
-	const __u32 k = 0;
-	int err;
-
-	err = bpf_map__update_elem(result_map, &k, sizeof(k), v, sizeof(v), BPF_ANY);
-	ASSERT_OK(err, "update test_result");
-
-	return err == 0;
-}
-
 void test_xdp_context_veth(void)
 {
 	LIBBPF_OPTS(bpf_tc_hook, tc_hook, .attach_point = BPF_TC_INGRESS);
@@ -270,11 +243,13 @@ void test_xdp_context_veth(void)
 	if (!ASSERT_GE(tx_ifindex, 0, "if_nametoindex tx"))
 		goto close;
 
+	skel->bss->test_pass = false;
+
 	ret = send_test_packet(tx_ifindex);
 	if (!ASSERT_OK(ret, "send_test_packet"))
 		goto close;
 
-	assert_test_result(skel->maps.test_result);
+	ASSERT_TRUE(skel->bss->test_pass, "test_pass");
 
 close:
 	close_netns(nstoken);
@@ -286,7 +261,7 @@ void test_xdp_context_veth(void)
 static void test_tuntap(struct bpf_program *xdp_prog,
 			struct bpf_program *tc_prio_1_prog,
 			struct bpf_program *tc_prio_2_prog,
-			struct bpf_map *result_map)
+			bool *test_pass)
 {
 	LIBBPF_OPTS(bpf_tc_hook, tc_hook, .attach_point = BPF_TC_INGRESS);
 	LIBBPF_OPTS(bpf_tc_opts, tc_opts, .handle = 1, .priority = 1);
@@ -295,8 +270,7 @@ static void test_tuntap(struct bpf_program *xdp_prog,
 	int tap_ifindex;
 	int ret;
 
-	if (!clear_test_result(result_map))
-		return;
+	*test_pass = false;
 
 	ns = netns_new(TAP_NETNS, true);
 	if (!ASSERT_OK_PTR(ns, "create and open ns"))
@@ -340,7 +314,7 @@ static void test_tuntap(struct bpf_program *xdp_prog,
 	if (!ASSERT_OK(ret, "write_test_packet"))
 		goto close;
 
-	assert_test_result(result_map);
+	ASSERT_TRUE(*test_pass, "test_pass");
 
 close:
 	if (tap_fd >= 0)
@@ -431,37 +405,37 @@ void test_xdp_context_tuntap(void)
 		test_tuntap(skel->progs.ing_xdp,
 			    skel->progs.ing_cls,
 			    NULL, /* tc prio 2 */
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_read"))
 		test_tuntap(skel->progs.ing_xdp,
 			    skel->progs.ing_cls_dynptr_read,
 			    NULL, /* tc prio 2 */
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_slice"))
 		test_tuntap(skel->progs.ing_xdp,
 			    skel->progs.ing_cls_dynptr_slice,
 			    NULL, /* tc prio 2 */
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_write"))
 		test_tuntap(skel->progs.ing_xdp_zalloc_meta,
 			    skel->progs.ing_cls_dynptr_write,
 			    skel->progs.ing_cls_dynptr_read,
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_slice_rdwr"))
 		test_tuntap(skel->progs.ing_xdp_zalloc_meta,
 			    skel->progs.ing_cls_dynptr_slice_rdwr,
 			    skel->progs.ing_cls_dynptr_slice,
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_offset"))
 		test_tuntap(skel->progs.ing_xdp_zalloc_meta,
 			    skel->progs.ing_cls_dynptr_offset_wr,
 			    skel->progs.ing_cls_dynptr_offset_rd,
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("dynptr_offset_oob"))
 		test_tuntap(skel->progs.ing_xdp,
 			    skel->progs.ing_cls_dynptr_offset_oob,
 			    skel->progs.ing_cls,
-			    skel->maps.test_result);
+			    &skel->bss->test_pass);
 	if (test__start_subtest("clone_data_meta_empty_on_data_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
 				   skel->progs.clone_data_meta_empty_on_data_write,
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index d79cb74b571e..11288b20f56c 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -11,37 +11,36 @@
 
 #define ctx_ptr(ctx, mem) (void *)(unsigned long)ctx->mem
 
-/* Demonstrates how metadata can be passed from an XDP program to a TC program
- * using bpf_xdp_adjust_meta.
- * For the sake of testing the metadata support in drivers, the XDP program uses
- * a fixed-size payload after the Ethernet header as metadata. The TC program
- * copies the metadata it receives into a map so it can be checked from
- * userspace.
+/* Demonstrate passing metadata from XDP to TC using bpf_xdp_adjust_meta.
+ *
+ * The XDP program extracts a fixed-size payload following the Ethernet header
+ * and stores it as packet metadata to test the driver's metadata support. The
+ * TC program then verifies if the passed metadata is correct.
  */
 
-struct {
-	__uint(type, BPF_MAP_TYPE_ARRAY);
-	__uint(max_entries, 1);
-	__type(key, __u32);
-	__uint(value_size, META_SIZE);
-} test_result SEC(".maps");
-
 bool test_pass;
 
+static const __u8 meta_want[META_SIZE] = {
+	0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+	0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+	0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28,
+	0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38,
+};
+
 SEC("tc")
 int ing_cls(struct __sk_buff *ctx)
 {
-	__u8 *data, *data_meta;
-	__u32 key = 0;
-
-	data_meta = ctx_ptr(ctx, data_meta);
-	data      = ctx_ptr(ctx, data);
+	__u8 *meta_have = ctx_ptr(ctx, data_meta);
+	__u8 *data = ctx_ptr(ctx, data);
 
-	if (data_meta + META_SIZE > data)
-		return TC_ACT_SHOT;
+	if (meta_have + META_SIZE > data)
+		goto out;
 
-	bpf_map_update_elem(&test_result, &key, data_meta, BPF_ANY);
+	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+		goto out;
 
+	test_pass = true;
+out:
 	return TC_ACT_SHOT;
 }
 
@@ -49,17 +48,17 @@ int ing_cls(struct __sk_buff *ctx)
 SEC("tc")
 int ing_cls_dynptr_read(struct __sk_buff *ctx)
 {
+	__u8 meta_have[META_SIZE];
 	struct bpf_dynptr meta;
-	const __u32 zero = 0;
-	__u8 *dst;
-
-	dst = bpf_map_lookup_elem(&test_result, &zero);
-	if (!dst)
-		return TC_ACT_SHOT;
 
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	bpf_dynptr_read(dst, META_SIZE, &meta, 0, 0);
+	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
+
+	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+		goto out;
 
+	test_pass = true;
+out:
 	return TC_ACT_SHOT;
 }
 
@@ -86,20 +85,18 @@ SEC("tc")
 int ing_cls_dynptr_slice(struct __sk_buff *ctx)
 {
 	struct bpf_dynptr meta;
-	const __u32 zero = 0;
-	__u8 *dst, *src;
-
-	dst = bpf_map_lookup_elem(&test_result, &zero);
-	if (!dst)
-		return TC_ACT_SHOT;
+	__u8 *meta_have;
 
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	src = bpf_dynptr_slice(&meta, 0, NULL, META_SIZE);
-	if (!src)
-		return TC_ACT_SHOT;
+	meta_have = bpf_dynptr_slice(&meta, 0, NULL, META_SIZE);
+	if (!meta_have)
+		goto out;
 
-	__builtin_memcpy(dst, src, META_SIZE);
+	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+		goto out;
 
+	test_pass = true;
+out:
 	return TC_ACT_SHOT;
 }
 
@@ -129,14 +126,12 @@ int ing_cls_dynptr_slice_rdwr(struct __sk_buff *ctx)
 SEC("tc")
 int ing_cls_dynptr_offset_rd(struct __sk_buff *ctx)
 {
-	struct bpf_dynptr meta;
 	const __u32 chunk_len = META_SIZE / 4;
-	const __u32 zero = 0;
+	__u8 meta_have[META_SIZE];
+	struct bpf_dynptr meta;
 	__u8 *dst, *src;
 
-	dst = bpf_map_lookup_elem(&test_result, &zero);
-	if (!dst)
-		return TC_ACT_SHOT;
+	dst = meta_have;
 
 	/* 1. Regular read */
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
@@ -155,9 +150,14 @@ int ing_cls_dynptr_offset_rd(struct __sk_buff *ctx)
 	/* 4. Read from a slice starting at an offset */
 	src = bpf_dynptr_slice(&meta, 2 * chunk_len, NULL, chunk_len);
 	if (!src)
-		return TC_ACT_SHOT;
+		goto out;
 	__builtin_memcpy(dst, src, chunk_len);
 
+	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+		goto out;
+
+	test_pass = true;
+out:
 	return TC_ACT_SHOT;
 }
 

-- 
2.43.0



* [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (8 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 09/15] selftests/bpf: Verify skb metadata in BPF instead of userspace Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-22 23:30   ` Martin KaFai Lau
  2025-10-19 12:45 ` [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata Jakub Sitnicki
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Add diagnostic output when metadata verification fails to help with
troubleshooting test failures. Introduce a check_metadata() helper that
prints both expected and received metadata to the BPF program's stderr
stream on mismatch. The userspace test reads and dumps this stream on
failure.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          | 28 +++++++++++++++++---
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 30 +++++++++++++++++++---
 2 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index 93a1fbe6a4fd..a3de37942fa4 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -171,6 +171,25 @@ static int write_test_packet(int tap_fd)
 	return 0;
 }
 
+enum {
+	BPF_STDOUT = 1,
+	BPF_STDERR = 2,
+};
+
+static void dump_err_stream(const struct bpf_program *prog)
+{
+	char buf[512];
+	int ret;
+
+	ret = 0;
+	do {
+		ret = bpf_prog_stream_read(bpf_program__fd(prog), BPF_STDERR,
+					   buf, sizeof(buf), NULL);
+		if (ret > 0)
+			fwrite(buf, sizeof(buf[0]), ret, stderr);
+	} while (ret > 0);
+}
+
 void test_xdp_context_veth(void)
 {
 	LIBBPF_OPTS(bpf_tc_hook, tc_hook, .attach_point = BPF_TC_INGRESS);
@@ -249,7 +268,8 @@ void test_xdp_context_veth(void)
 	if (!ASSERT_OK(ret, "send_test_packet"))
 		goto close;
 
-	ASSERT_TRUE(skel->bss->test_pass, "test_pass");
+	if (!ASSERT_TRUE(skel->bss->test_pass, "test_pass"))
+		dump_err_stream(tc_prog);
 
 close:
 	close_netns(nstoken);
@@ -314,7 +334,8 @@ static void test_tuntap(struct bpf_program *xdp_prog,
 	if (!ASSERT_OK(ret, "write_test_packet"))
 		goto close;
 
-	ASSERT_TRUE(*test_pass, "test_pass");
+	if (!ASSERT_TRUE(*test_pass, "test_pass"))
+		dump_err_stream(tc_prio_2_prog ? : tc_prio_1_prog);
 
 close:
 	if (tap_fd >= 0)
@@ -385,7 +406,8 @@ static void test_tuntap_mirred(struct bpf_program *xdp_prog,
 	if (!ASSERT_OK(ret, "write_test_packet"))
 		goto close;
 
-	ASSERT_TRUE(*test_pass, "test_pass");
+	if (!ASSERT_TRUE(*test_pass, "test_pass"))
+		dump_err_stream(tc_prog);
 
 close:
 	if (tap_fd >= 0)
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index 11288b20f56c..33480bcb8ec1 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -18,6 +18,11 @@
  * TC program then verifies if the passed metadata is correct.
  */
 
+enum {
+	BPF_STDOUT = 1,
+	BPF_STDERR = 2,
+};
+
 bool test_pass;
 
 static const __u8 meta_want[META_SIZE] = {
@@ -27,6 +32,23 @@ static const __u8 meta_want[META_SIZE] = {
 	0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38,
 };
 
+static bool check_metadata(const char *file, int line, __u8 *meta_have)
+{
+	if (!__builtin_memcmp(meta_have, meta_want, META_SIZE))
+		return true;
+
+	bpf_stream_printk(BPF_STDERR,
+			  "FAIL:%s:%d: metadata mismatch\n"
+			  "  have:\n    %pI6\n    %pI6\n"
+			  "  want:\n    %pI6\n    %pI6\n",
+			  file, line,
+			  &meta_have[0x00], &meta_have[0x10],
+			  &meta_want[0x00], &meta_want[0x10]);
+	return false;
+}
+
+#define check_metadata(meta_have) check_metadata(__FILE__, __LINE__, meta_have)
+
 SEC("tc")
 int ing_cls(struct __sk_buff *ctx)
 {
@@ -36,7 +58,7 @@ int ing_cls(struct __sk_buff *ctx)
 	if (meta_have + META_SIZE > data)
 		goto out;
 
-	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+	if (!check_metadata(meta_have))
 		goto out;
 
 	test_pass = true;
@@ -54,7 +76,7 @@ int ing_cls_dynptr_read(struct __sk_buff *ctx)
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
 	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
 
-	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+	if (!check_metadata(meta_have))
 		goto out;
 
 	test_pass = true;
@@ -92,7 +114,7 @@ int ing_cls_dynptr_slice(struct __sk_buff *ctx)
 	if (!meta_have)
 		goto out;
 
-	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+	if (!check_metadata(meta_have))
 		goto out;
 
 	test_pass = true;
@@ -153,7 +175,7 @@ int ing_cls_dynptr_offset_rd(struct __sk_buff *ctx)
 		goto out;
 	__builtin_memcpy(dst, src, chunk_len);
 
-	if (__builtin_memcmp(meta_want, meta_have, META_SIZE))
+	if (!check_metadata(meta_have))
 		goto out;
 
 	test_pass = true;

-- 
2.43.0



* [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (9 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-22 23:12   ` Martin KaFai Lau
  2025-10-19 12:45 ` [PATCH bpf-next v2 12/15] selftests/bpf: Cover skb metadata access after vlan push/pop helper Jakub Sitnicki
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Since pskb_expand_head() no longer clears metadata on unclone, update tests
for cloned packets to expect metadata to remain intact.

Also simplify the clone_dynptr_kept_on_{data,meta}_slice_write tests.
Creating an r/w dynptr slice is sufficient to trigger an unclone in the
prologue, so remove the extraneous writes to the data/meta slice.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          | 20 ++---
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 87 ++++++++++++----------
 2 files changed, 59 insertions(+), 48 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index a3de37942fa4..df6248dbaae8 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -458,25 +458,25 @@ void test_xdp_context_tuntap(void)
 			    skel->progs.ing_cls_dynptr_offset_oob,
 			    skel->progs.ing_cls,
 			    &skel->bss->test_pass);
-	if (test__start_subtest("clone_data_meta_empty_on_data_write"))
+	if (test__start_subtest("clone_data_meta_kept_on_data_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
-				   skel->progs.clone_data_meta_empty_on_data_write,
+				   skel->progs.clone_data_meta_kept_on_data_write,
 				   &skel->bss->test_pass);
-	if (test__start_subtest("clone_data_meta_empty_on_meta_write"))
+	if (test__start_subtest("clone_data_meta_kept_on_meta_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
-				   skel->progs.clone_data_meta_empty_on_meta_write,
+				   skel->progs.clone_data_meta_kept_on_meta_write,
 				   &skel->bss->test_pass);
-	if (test__start_subtest("clone_dynptr_empty_on_data_slice_write"))
+	if (test__start_subtest("clone_dynptr_kept_on_data_slice_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
-				   skel->progs.clone_dynptr_empty_on_data_slice_write,
+				   skel->progs.clone_dynptr_kept_on_data_slice_write,
 				   &skel->bss->test_pass);
-	if (test__start_subtest("clone_dynptr_empty_on_meta_slice_write"))
+	if (test__start_subtest("clone_dynptr_kept_on_meta_slice_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
-				   skel->progs.clone_dynptr_empty_on_meta_slice_write,
+				   skel->progs.clone_dynptr_kept_on_meta_slice_write,
 				   &skel->bss->test_pass);
-	if (test__start_subtest("clone_dynptr_rdonly_before_data_dynptr_write"))
+	if (test__start_subtest("clone_dynptr_rdonly_before_data_dynptr_write_then_rw"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
-				   skel->progs.clone_dynptr_rdonly_before_data_dynptr_write,
+				   skel->progs.clone_dynptr_rdonly_before_data_dynptr_write_then_rw,
 				   &skel->bss->test_pass);
 	if (test__start_subtest("clone_dynptr_rdonly_before_meta_dynptr_write"))
 		test_tuntap_mirred(skel->progs.ing_xdp,
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index 33480bcb8ec1..dba76f84c0c5 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -326,12 +326,13 @@ int ing_xdp(struct xdp_md *ctx)
 }
 
 /*
- * Check that skb->data_meta..skb->data is empty if prog writes to packet
+ * Check that skb->data_meta..skb->data is kept intact if prog writes to packet
  * _payload_ using packet pointers. Applies only to cloned skbs.
  */
 SEC("tc")
-int clone_data_meta_empty_on_data_write(struct __sk_buff *ctx)
+int clone_data_meta_kept_on_data_write(struct __sk_buff *ctx)
 {
+	__u8 *meta_have = ctx_ptr(ctx, data_meta);
 	struct ethhdr *eth = ctx_ptr(ctx, data);
 
 	if (eth + 1 > ctx_ptr(ctx, data_end))
@@ -340,8 +341,10 @@ int clone_data_meta_empty_on_data_write(struct __sk_buff *ctx)
 	if (eth->h_proto != 0)
 		goto out;
 
-	/* Expect no metadata */
-	if (ctx->data_meta != ctx->data)
+	if (meta_have + META_SIZE > eth)
+		goto out;
+
+	if (!check_metadata(meta_have))
 		goto out;
 
 	/* Packet write to trigger unclone in prologue */
@@ -353,14 +356,14 @@ int clone_data_meta_empty_on_data_write(struct __sk_buff *ctx)
 }
 
 /*
- * Check that skb->data_meta..skb->data is empty if prog writes to packet
+ * Check that skb->data_meta..skb->data is kept intact if prog writes to packet
  * _metadata_ using packet pointers. Applies only to cloned skbs.
  */
 SEC("tc")
-int clone_data_meta_empty_on_meta_write(struct __sk_buff *ctx)
+int clone_data_meta_kept_on_meta_write(struct __sk_buff *ctx)
 {
+	__u8 *meta_have = ctx_ptr(ctx, data_meta);
 	struct ethhdr *eth = ctx_ptr(ctx, data);
-	__u8 *md = ctx_ptr(ctx, data_meta);
 
 	if (eth + 1 > ctx_ptr(ctx, data_end))
 		goto out;
@@ -368,25 +371,29 @@ int clone_data_meta_empty_on_meta_write(struct __sk_buff *ctx)
 	if (eth->h_proto != 0)
 		goto out;
 
-	if (md + 1 > ctx_ptr(ctx, data)) {
-		/* Expect no metadata */
-		test_pass = true;
-	} else {
-		/* Metadata write to trigger unclone in prologue */
-		*md = 42;
-	}
+	if (meta_have + META_SIZE > eth)
+		goto out;
+
+	if (!check_metadata(meta_have))
+		goto out;
+
+	/* Metadata write to trigger unclone in prologue */
+	*meta_have = 42;
+
+	test_pass = true;
 out:
 	return TC_ACT_SHOT;
 }
 
 /*
- * Check that skb_meta dynptr is writable but empty if prog writes to packet
- * _payload_ using a dynptr slice. Applies only to cloned skbs.
+ * Check that skb_meta dynptr is writable and was kept intact if prog creates a
+ * r/w slice to packet _payload_. Applies only to cloned skbs.
  */
 SEC("tc")
-int clone_dynptr_empty_on_data_slice_write(struct __sk_buff *ctx)
+int clone_dynptr_kept_on_data_slice_write(struct __sk_buff *ctx)
 {
 	struct bpf_dynptr data, meta;
+	__u8 meta_have[META_SIZE];
 	struct ethhdr *eth;
 
 	bpf_dynptr_from_skb(ctx, 0, &data);
@@ -397,29 +404,26 @@ int clone_dynptr_empty_on_data_slice_write(struct __sk_buff *ctx)
 	if (eth->h_proto != 0)
 		goto out;
 
-	/* Expect no metadata */
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) > 0)
+	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
+	if (!check_metadata(meta_have))
 		goto out;
 
-	/* Packet write to trigger unclone in prologue */
-	eth->h_proto = 42;
-
 	test_pass = true;
 out:
 	return TC_ACT_SHOT;
 }
 
 /*
- * Check that skb_meta dynptr is writable but empty if prog writes to packet
- * _metadata_ using a dynptr slice. Applies only to cloned skbs.
+ * Check that skb_meta dynptr is writable and was kept intact if prog creates
+ * an r/w slice to packet _metadata_. Applies only to cloned skbs.
  */
 SEC("tc")
-int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
+int clone_dynptr_kept_on_meta_slice_write(struct __sk_buff *ctx)
 {
 	struct bpf_dynptr data, meta;
 	const struct ethhdr *eth;
-	__u8 *md;
+	__u8 *meta_have;
 
 	bpf_dynptr_from_skb(ctx, 0, &data);
 	eth = bpf_dynptr_slice(&data, 0, NULL, sizeof(*eth));
@@ -429,16 +433,13 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
 	if (eth->h_proto != 0)
 		goto out;
 
-	/* Expect no metadata */
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) > 0)
+	meta_have = bpf_dynptr_slice_rdwr(&meta, 0, NULL, META_SIZE);
+	if (!meta_have)
 		goto out;
 
-	/* Metadata write to trigger unclone in prologue */
-	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	md = bpf_dynptr_slice_rdwr(&meta, 0, NULL, sizeof(*md));
-	if (md)
-		*md = 42;
+	if (!check_metadata(meta_have))
+		goto out;
 
 	test_pass = true;
 out:
@@ -447,12 +448,14 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
 
 /*
  * Check that skb_meta dynptr is read-only before prog writes to packet payload
- * using dynptr_write helper. Applies only to cloned skbs.
+ * using dynptr_write helper, and becomes read-write afterwards. Applies only to
+ * cloned skbs.
  */
 SEC("tc")
-int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
+int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
 {
 	struct bpf_dynptr data, meta;
+	__u8 meta_have[META_SIZE];
 	const struct ethhdr *eth;
 
 	bpf_dynptr_from_skb(ctx, 0, &data);
@@ -465,15 +468,23 @@ int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
 
 	/* Expect read-only metadata before unclone */
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	if (!bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != META_SIZE)
+	if (!bpf_dynptr_is_rdonly(&meta))
+		goto out;
+
+	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
+	if (!check_metadata(meta_have))
 		goto out;
 
 	/* Helper write to payload will unclone the packet */
 	bpf_dynptr_write(&data, offsetof(struct ethhdr, h_proto), "x", 1, 0);
 
-	/* Expect no metadata after unclone */
+	/* Expect r/w metadata after unclone */
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
-	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != 0)
+	if (bpf_dynptr_is_rdonly(&meta))
+		goto out;
+
+	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
+	if (!check_metadata(meta_have))
 		goto out;
 
 	test_pass = true;

-- 
2.43.0



* [PATCH bpf-next v2 12/15] selftests/bpf: Cover skb metadata access after vlan push/pop helper
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (10 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 13/15] selftests/bpf: Cover skb metadata access after bpf_skb_adjust_room Jakub Sitnicki
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Add a test to verify that skb metadata remains accessible after calling
bpf_skb_vlan_push() and bpf_skb_vlan_pop(), which modify the packet
headroom.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          |  6 +++
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 43 ++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index df6248dbaae8..e83b33526595 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -482,6 +482,12 @@ void test_xdp_context_tuntap(void)
 		test_tuntap_mirred(skel->progs.ing_xdp,
 				   skel->progs.clone_dynptr_rdonly_before_meta_dynptr_write,
 				   &skel->bss->test_pass);
+	/* Tests for BPF helpers which touch headroom */
+	if (test__start_subtest("helper_skb_vlan_push_pop"))
+		test_tuntap(skel->progs.ing_xdp,
+			    skel->progs.helper_skb_vlan_push_pop,
+			    NULL, /* tc prio 2 */
+			    &skel->bss->test_pass);
 
 	test_xdp_meta__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index dba76f84c0c5..8d2b0512f8d3 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -49,6 +49,16 @@ static bool check_metadata(const char *file, int line, __u8 *meta_have)
 
 #define check_metadata(meta_have) check_metadata(__FILE__, __LINE__, meta_have)
 
+static bool check_skb_metadata(const char *file, int line, struct __sk_buff *skb)
+{
+	__u8 *data_meta = ctx_ptr(skb, data_meta);
+	__u8 *data = ctx_ptr(skb, data);
+
+	return data_meta + META_SIZE <= data && (check_metadata)(file, line, data_meta);
+}
+
+#define check_skb_metadata(skb) check_skb_metadata(__FILE__, __LINE__, skb)
+
 SEC("tc")
 int ing_cls(struct __sk_buff *ctx)
 {
@@ -525,4 +535,37 @@ int clone_dynptr_rdonly_before_meta_dynptr_write(struct __sk_buff *ctx)
 	return TC_ACT_SHOT;
 }
 
+SEC("tc")
+int helper_skb_vlan_push_pop(struct __sk_buff *ctx)
+{
+	int err;
+
+	/* bpf_skb_vlan_push assumes HW offload for the primary VLAN tag. Only
+	 * a secondary tag push triggers an actual MAC header modification.
+	 */
+	err = bpf_skb_vlan_push(ctx, 0, 42);
+	if (err)
+		goto out;
+	err = bpf_skb_vlan_push(ctx, 0, 207);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	err = bpf_skb_vlan_pop(ctx);
+	if (err)
+		goto out;
+	err = bpf_skb_vlan_pop(ctx);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	test_pass = true;
+out:
+	return TC_ACT_SHOT;
+}
+
 char _license[] SEC("license") = "GPL";

-- 
2.43.0



* [PATCH bpf-next v2 13/15] selftests/bpf: Cover skb metadata access after bpf_skb_adjust_room
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (11 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 12/15] selftests/bpf: Cover skb metadata access after vlan push/pop helper Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 14/15] selftests/bpf: Cover skb metadata access after change_head/tail helper Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 15/15] selftests/bpf: Cover skb metadata access after bpf_skb_change_proto Jakub Sitnicki
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Add a test to verify that skb metadata remains accessible after calling
bpf_skb_adjust_room(), which modifies the packet headroom and can trigger
head reallocation.

The helper expects an Ethernet frame carrying an IP packet, so switch to
identifying test packets by their source MAC address, since we can no
longer rely on the Ethernet proto being set to zero.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          | 25 ++++++---
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 61 ++++++++++++++++++----
 2 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index e83b33526595..05d862e460b5 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -124,10 +124,10 @@ static int send_test_packet(int ifindex)
 	int n, sock = -1;
 	__u8 packet[sizeof(struct ethhdr) + TEST_PAYLOAD_LEN];
 
-	/* The ethernet header is not relevant for this test and doesn't need to
-	 * be meaningful.
-	 */
-	struct ethhdr eth = { 0 };
+	/* We use the Ethernet header only to identify the test packet */
+	struct ethhdr eth = {
+		.h_source = { 0x12, 0x34, 0xDE, 0xAD, 0xBE, 0xEF },
+	};
 
 	memcpy(packet, &eth, sizeof(eth));
 	memcpy(packet + sizeof(eth), test_payload, TEST_PAYLOAD_LEN);
@@ -160,8 +160,16 @@ static int write_test_packet(int tap_fd)
 	__u8 packet[sizeof(struct ethhdr) + TEST_PAYLOAD_LEN];
 	int n;
 
-	/* The ethernet header doesn't need to be valid for this test */
-	memset(packet, 0, sizeof(struct ethhdr));
+	/* The Ethernet header is mostly not relevant. We use it to identify
+	 * the test packet, and some BPF helpers we exercise expect to operate
+	 * on Ethernet frames carrying IP packets. Pretend that's the case.
+	 */
+	struct ethhdr eth = {
+		.h_source = { 0x12, 0x34, 0xDE, 0xAD, 0xBE, 0xEF },
+		.h_proto = htons(ETH_P_IP),
+	};
+
+	memcpy(packet, &eth, sizeof(eth));
 	memcpy(packet + sizeof(struct ethhdr), test_payload, TEST_PAYLOAD_LEN);
 
 	n = write(tap_fd, packet, sizeof(packet));
@@ -488,6 +496,11 @@ void test_xdp_context_tuntap(void)
 			    skel->progs.helper_skb_vlan_push_pop,
 			    NULL, /* tc prio 2 */
 			    &skel->bss->test_pass);
+	if (test__start_subtest("helper_skb_adjust_room"))
+		test_tuntap(skel->progs.ing_xdp,
+			    skel->progs.helper_skb_adjust_room,
+			    NULL, /* tc prio 2 */
+			    &skel->bss->test_pass);
 
 	test_xdp_meta__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index 8d2b0512f8d3..e29df7f82a89 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -25,6 +25,10 @@ enum {
 
 bool test_pass;
 
+static const __u8 smac_want[ETH_ALEN] = {
+	0x12, 0x34, 0xDE, 0xAD, 0xBE, 0xEF,
+};
+
 static const __u8 meta_want[META_SIZE] = {
 	0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
 	0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
@@ -32,6 +36,11 @@ static const __u8 meta_want[META_SIZE] = {
 	0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38,
 };
 
+static bool check_smac(const struct ethhdr *eth)
+{
+	return !__builtin_memcmp(eth->h_source, smac_want, ETH_ALEN);
+}
+
 static bool check_metadata(const char *file, int line, __u8 *meta_have)
 {
 	if (!__builtin_memcmp(meta_have, meta_want, META_SIZE))
@@ -286,7 +295,7 @@ int ing_xdp_zalloc_meta(struct xdp_md *ctx)
 	/* Drop any non-test packets */
 	if (eth + 1 > ctx_ptr(ctx, data_end))
 		return XDP_DROP;
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		return XDP_DROP;
 
 	ret = bpf_xdp_adjust_meta(ctx, -META_SIZE);
@@ -326,9 +335,9 @@ int ing_xdp(struct xdp_md *ctx)
 
 	/* The Linux networking stack may send other packets on the test
 	 * interface that interfere with the test. Just drop them.
-	 * The test packets can be recognized by their ethertype of zero.
+	 * The test packets can be recognized by their source MAC address.
 	 */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		return XDP_DROP;
 
 	__builtin_memcpy(data_meta, payload, META_SIZE);
@@ -348,7 +357,7 @@ int clone_data_meta_kept_on_data_write(struct __sk_buff *ctx)
 	if (eth + 1 > ctx_ptr(ctx, data_end))
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	if (meta_have + META_SIZE > eth)
@@ -378,7 +387,7 @@ int clone_data_meta_kept_on_meta_write(struct __sk_buff *ctx)
 	if (eth + 1 > ctx_ptr(ctx, data_end))
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	if (meta_have + META_SIZE > eth)
@@ -411,7 +420,7 @@ int clone_dynptr_kept_on_data_slice_write(struct __sk_buff *ctx)
 	if (!eth)
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
@@ -440,7 +449,7 @@ int clone_dynptr_kept_on_meta_slice_write(struct __sk_buff *ctx)
 	if (!eth)
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
@@ -473,7 +482,7 @@ int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
 	if (!eth)
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	/* Expect read-only metadata before unclone */
@@ -517,7 +526,7 @@ int clone_dynptr_rdonly_before_meta_dynptr_write(struct __sk_buff *ctx)
 	if (!eth)
 		goto out;
 	/* Ignore non-test packets */
-	if (eth->h_proto != 0)
+	if (!check_smac(eth))
 		goto out;
 
 	/* Expect read-only metadata */
@@ -568,4 +577,38 @@ int helper_skb_vlan_push_pop(struct __sk_buff *ctx)
 	return TC_ACT_SHOT;
 }
 
+SEC("tc")
+int helper_skb_adjust_room(struct __sk_buff *ctx)
+{
+	int err;
+
+	/* Grow a 1 byte hole after the MAC header */
+	err = bpf_skb_adjust_room(ctx, 1, BPF_ADJ_ROOM_MAC, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	/* Shrink the 1 byte hole after the MAC header */
+	err = bpf_skb_adjust_room(ctx, -1, BPF_ADJ_ROOM_MAC, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	/* Grow a 256 byte hole to trigger head reallocation */
+	err = bpf_skb_adjust_room(ctx, 256, BPF_ADJ_ROOM_MAC, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	test_pass = true;
+out:
+	return TC_ACT_SHOT;
+}
+
 char _license[] SEC("license") = "GPL";

-- 
2.43.0



* [PATCH bpf-next v2 14/15] selftests/bpf: Cover skb metadata access after change_head/tail helper
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (12 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 13/15] selftests/bpf: Cover skb metadata access after bpf_skb_adjust_room Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  2025-10-19 12:45 ` [PATCH bpf-next v2 15/15] selftests/bpf: Cover skb metadata access after bpf_skb_change_proto Jakub Sitnicki
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Add a test to verify that skb metadata remains accessible after calling
bpf_skb_change_head() and bpf_skb_change_tail(), which modify packet
headroom/tailroom and can trigger head reallocation.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          |  5 ++++
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 34 ++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index 05d862e460b5..8880feb84cbf 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -501,6 +501,11 @@ void test_xdp_context_tuntap(void)
 			    skel->progs.helper_skb_adjust_room,
 			    NULL, /* tc prio 2 */
 			    &skel->bss->test_pass);
+	if (test__start_subtest("helper_skb_change_head_tail"))
+		test_tuntap(skel->progs.ing_xdp,
+			    skel->progs.helper_skb_change_head_tail,
+			    NULL, /* tc prio 2 */
+			    &skel->bss->test_pass);
 
 	test_xdp_meta__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index e29df7f82a89..30ad4b1d00d5 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -611,4 +611,38 @@ int helper_skb_adjust_room(struct __sk_buff *ctx)
 	return TC_ACT_SHOT;
 }
 
+SEC("tc")
+int helper_skb_change_head_tail(struct __sk_buff *ctx)
+{
+	int err;
+
+	/* Reserve 1 extra in the front for packet data */
+	err = bpf_skb_change_head(ctx, 1, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	/* Reserve 256 extra bytes in the front to trigger head reallocation */
+	err = bpf_skb_change_head(ctx, 256, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	/* Reserve 4k extra bytes in the back to trigger head reallocation */
+	err = bpf_skb_change_tail(ctx, ctx->len + 4096, 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	test_pass = true;
+out:
+	return TC_ACT_SHOT;
+}
+
 char _license[] SEC("license") = "GPL";

-- 
2.43.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH bpf-next v2 15/15] selftests/bpf: Cover skb metadata access after bpf_skb_change_proto
  2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
                   ` (13 preceding siblings ...)
  2025-10-19 12:45 ` [PATCH bpf-next v2 14/15] selftests/bpf: Cover skb metadata access after change_head/tail helper Jakub Sitnicki
@ 2025-10-19 12:45 ` Jakub Sitnicki
  14 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-19 12:45 UTC (permalink / raw)
  To: bpf
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

Add a test to verify that skb metadata remains accessible after calling
bpf_skb_change_proto(), which modifies packet headroom to accommodate
different IP header sizes.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
---
 .../bpf/prog_tests/xdp_context_test_run.c          |  5 +++++
 tools/testing/selftests/bpf/progs/test_xdp_meta.c  | 25 ++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
index 8880feb84cbf..6272d0451d23 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
@@ -506,6 +506,11 @@ void test_xdp_context_tuntap(void)
 			    skel->progs.helper_skb_change_head_tail,
 			    NULL, /* tc prio 2 */
 			    &skel->bss->test_pass);
+	if (test__start_subtest("helper_skb_change_proto"))
+		test_tuntap(skel->progs.ing_xdp,
+			    skel->progs.helper_skb_change_proto,
+			    NULL, /* tc prio 2 */
+			    &skel->bss->test_pass);
 
 	test_xdp_meta__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
index 30ad4b1d00d5..6e4abac63e68 100644
--- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
+++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
@@ -4,6 +4,7 @@
 #include <linux/if_ether.h>
 #include <linux/pkt_cls.h>
 
+#include <bpf/bpf_endian.h>
 #include <bpf/bpf_helpers.h>
 #include "bpf_kfuncs.h"
 
@@ -645,4 +646,28 @@ int helper_skb_change_head_tail(struct __sk_buff *ctx)
 	return TC_ACT_SHOT;
 }
 
+SEC("tc")
+int helper_skb_change_proto(struct __sk_buff *ctx)
+{
+	int err;
+
+	err = bpf_skb_change_proto(ctx, bpf_htons(ETH_P_IPV6), 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	err = bpf_skb_change_proto(ctx, bpf_htons(ETH_P_IP), 0);
+	if (err)
+		goto out;
+
+	if (!check_skb_metadata(ctx))
+		goto out;
+
+	test_pass = true;
+out:
+	return TC_ACT_SHOT;
+}
+
 char _license[] SEC("license") = "GPL";

-- 
2.43.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata
  2025-10-19 12:45 ` [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata Jakub Sitnicki
@ 2025-10-22 23:12   ` Martin KaFai Lau
  2025-10-23 11:55     ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: Martin KaFai Lau @ 2025-10-22 23:12 UTC (permalink / raw)
  To: Jakub Sitnicki
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team



On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
> @@ -447,12 +448,14 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
>   
>   /*
>    * Check that skb_meta dynptr is read-only before prog writes to packet payload
> - * using dynptr_write helper. Applies only to cloned skbs.
> + * using dynptr_write helper, and becomes read-write afterwards. Applies only to
> + * cloned skbs.
>    */
>   SEC("tc")
> -int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
> +int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
>   {
>   	struct bpf_dynptr data, meta;
> +	__u8 meta_have[META_SIZE];
>   	const struct ethhdr *eth;
>   
>   	bpf_dynptr_from_skb(ctx, 0, &data);
> @@ -465,15 +468,23 @@ int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>   
>   	/* Expect read-only metadata before unclone */
>   	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
> -	if (!bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != META_SIZE)
> +	if (!bpf_dynptr_is_rdonly(&meta))

Can the bpf_dynptr_set_rdonly() be lifted from the 
bpf_dynptr_from_skb_meta()?

iiuc, the remaining thing left should be handling a cloned skb in 
__bpf_dynptr_write()? The __bpf_skb_store_bytes() is using 
bpf_try_make_writable, so maybe something similar can be done for the 
BPF_DYNPTR_TYPE_SKB_META?

> +		goto out;
> +
> +	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
> +	if (!check_metadata(meta_have))
>   		goto out;
>   
>   	/* Helper write to payload will unclone the packet */
>   	bpf_dynptr_write(&data, offsetof(struct ethhdr, h_proto), "x", 1, 0);
>   
> -	/* Expect no metadata after unclone */
> +	/* Expect r/w metadata after unclone */
>   	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
> -	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != 0)
> +	if (bpf_dynptr_is_rdonly(&meta))

then it does not have to rely on the bpf_dynptr_write(&data, ...) above 
to make the metadata writable.

I have a high level question about the set. I assume the skb_data_move() 
in patch 2 will be useful in the future to preserve the metadata across 
the stack. Preserving the metadata across different tc progs (which this 
set does) is nice to have but it is not the end goal. Can you shed some 
light on the plan for building on top of this set?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure
  2025-10-19 12:45 ` [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure Jakub Sitnicki
@ 2025-10-22 23:30   ` Martin KaFai Lau
  2025-10-23 10:38     ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: Martin KaFai Lau @ 2025-10-22 23:30 UTC (permalink / raw)
  To: Jakub Sitnicki
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team



On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
> diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> index 93a1fbe6a4fd..a3de37942fa4 100644
> --- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> +++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
> @@ -171,6 +171,25 @@ static int write_test_packet(int tap_fd)
>   	return 0;
>   }
>   
> +enum {
> +	BPF_STDOUT = 1,
> +	BPF_STDERR = 2,

There is BPF_STREAM_STDERR in uapi/bpf.h
> diff --git a/tools/testing/selftests/bpf/progs/test_xdp_meta.c b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
> index 11288b20f56c..33480bcb8ec1 100644
> --- a/tools/testing/selftests/bpf/progs/test_xdp_meta.c
> +++ b/tools/testing/selftests/bpf/progs/test_xdp_meta.c
> @@ -18,6 +18,11 @@
>    * TC program then verifies if the passed metadata is correct.
>    */
>   
> +enum {
> +	BPF_STDOUT = 1,
> +	BPF_STDERR = 2,
> +};
> +

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure
  2025-10-22 23:30   ` Martin KaFai Lau
@ 2025-10-23 10:38     ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-23 10:38 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team

On Wed, Oct 22, 2025 at 04:30 PM -07, Martin KaFai Lau wrote:
> On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
>> diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
>> index 93a1fbe6a4fd..a3de37942fa4 100644
>> --- a/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
>> +++ b/tools/testing/selftests/bpf/prog_tests/xdp_context_test_run.c
>> @@ -171,6 +171,25 @@ static int write_test_packet(int tap_fd)
>>   	return 0;
>>   }
>>   +enum {
>> +	BPF_STDOUT = 1,
>> +	BPF_STDERR = 2,
>
> There is BPF_STREAM_STDERR in uapi/bpf.h

How did I miss that? Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata
  2025-10-22 23:12   ` Martin KaFai Lau
@ 2025-10-23 11:55     ` Jakub Sitnicki
  2025-10-24  2:32       ` Martin KaFai Lau
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-23 11:55 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team

On Wed, Oct 22, 2025 at 04:12 PM -07, Martin KaFai Lau wrote:
> On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
>> @@ -447,12 +448,14 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
>>     /*
>>    * Check that skb_meta dynptr is read-only before prog writes to packet payload
>> - * using dynptr_write helper. Applies only to cloned skbs.
>> + * using dynptr_write helper, and becomes read-write afterwards. Applies only to
>> + * cloned skbs.
>>    */
>>   SEC("tc")
>> -int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>> +int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
>>   {
>>   	struct bpf_dynptr data, meta;
>> +	__u8 meta_have[META_SIZE];
>>   	const struct ethhdr *eth;
>>     	bpf_dynptr_from_skb(ctx, 0, &data);
>> @@ -465,15 +468,23 @@ int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>>     	/* Expect read-only metadata before unclone */
>>   	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
>> -	if (!bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != META_SIZE)
>> +	if (!bpf_dynptr_is_rdonly(&meta))
>
> Can the bpf_dynptr_set_rdonly() be lifted from the bpf_dynptr_from_skb_meta()?
>
> iiuc, the remaining thing left should be handling a cloned skb in
> __bpf_dynptr_write()? The __bpf_skb_store_bytes() is using
> bpf_try_make_writable, so maybe something similar can be done for the
> BPF_DYNPTR_TYPE_SKB_META?

I'm with you. This is not user-friendly at all currently.

This patch set has already gotten quite long so how about I split out
the pskb_expand_head patch (#1) and the related selftest change (patch
#11) from this series, expand it to lift bpf_dynptr_set_rdonly()
limitation for skb_meta dynptr, and do that first in a dedicated series?

>
>> +		goto out;
>> +
>> +	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
>> +	if (!check_metadata(meta_have))
>>   		goto out;
>>     	/* Helper write to payload will unclone the packet */
>>   	bpf_dynptr_write(&data, offsetof(struct ethhdr, h_proto), "x", 1, 0);
>>   -	/* Expect no metadata after unclone */
>> +	/* Expect r/w metadata after unclone */
>>   	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
>> -	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != 0)
>> +	if (bpf_dynptr_is_rdonly(&meta))
>
> then it does not have to rely on the bpf_dynptr_write(&data, ...) above to make
> the metadata writable.
>
> I have a high level question about the set. I assume the skb_data_move() in
> patch 2 will be useful in the future to preserve the metadata across the
> stack. Preserving the metadata across different tc progs (which this set does)
> is nice to have but it is not the end goal. Can you shed some light on the plan
> for building on top of this set?

Right. Starting at the highest level, I want to work toward preserving
the metadata on RX path first (ongoing), forward path next, and TX path
last.

On RX path, the end game is for sk_filter prog to be able to access
metadata thru dynptr. For that we need to know where the metadata
resides. I see two ways how we can tackle that:

A) We keep relying on metadata being in front of skb_mac_header().

   Fun fact - if you don't call any TC BPF helpers that touch
   skb->mac_header and don't have any tunnel or tagging devices on RX
   path, this works out of the box today. But we need to make sure that
   any call site that changes the MAC header offset moves the
   metadata. I expect this approach will be a pain on TX path.

... or ...

B) We track the metadata offset separately from MAC header offset

   This requires additional state, we need to store the metadata offset
   somewhere. However, in exchange for a couple bytes we gain some
   benefits:

   1. We don't need to move the metadata after skb_pull.

   2. We only need to move the metadata for skb_push if there's not
     enough space left, that is, the gap between skb->data and where
     the metadata ends is too small.

     (This means that anyone who is not using skb->data_meta on RX path
     but the skb_meta dynptr instead, can avoid any memmove's of the
     metadata itself.)
     
   3. We can place the metadata at skb->head, which plays nicely with TX
      path, where we need the headroom for pushing headers.

I've been trying out how (B) plays out when safe-proofing the tunnel &
tagging devices, your VLANs and GREs, to preserve the metadata.

To that end I've added a new u16 field in skb_shinfo to track
meta_end. There's a 4B hole there currently and we load the whole
cacheline from skb_shinfo to access meta_len anyway.

Once I had that, I could modify the skb_data_move() to relocate the
metadata only if necessary, which looks like so:

static inline void skb_data_move(struct sk_buff *skb, const int len,
				 const unsigned int n)
{
	const u8 meta_len = skb_metadata_len(skb);
	u8 *meta, *meta_end;

	if (!len || (!n && !meta_len))
		return;

	if (!meta_len)
		goto no_metadata;

	/* Not enough headroom left for metadata. Drop it. */
	if (WARN_ON_ONCE(meta_len > skb_headroom(skb))) {
		skb_metadata_clear(skb);
		goto no_metadata;
	}

	meta_end = skb_metadata_end(skb);
	meta = meta_end - meta_len;

	/* Metadata in front of data before push/pull. Keep it that way. */
	if (meta_end == skb->data - len) {
		memmove(meta + len, meta, meta_len + n);
		skb_shinfo(skb)->meta_end += len;
		return;
	}

	if (len < 0) {
		/* Data pushed. Move metadata to the top. */
		memmove(skb->head, meta, meta_len);
		skb_shinfo(skb)->meta_end = meta_len;
	}
no_metadata:
	memmove(skb->data, skb->data - len, n);
}

The goal for the RX path is to hit just the last memmove() everywhere,
since we will usually be pulling from skb->data, if you're not using the
skb->data_meta pseudo-pointer in your TC(X) BPF programs.

There are some verifier changes needed to keep skb->data_meta
working. We need to move the metadata back in front of the MAC header
before a TC(X) prog that uses skb->data_meta runs, or things break.

Early code for that is also available for a preview. I've pushed it to:

https://github.com/jsitnicki/linux/commits/skb-meta/safeproof-netdevs/

Thanks,
-jkbs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head
  2025-10-19 12:45 ` [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head Jakub Sitnicki
@ 2025-10-24  0:51   ` Jakub Kicinski
  2025-10-24 12:17     ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: Jakub Kicinski @ 2025-10-24  0:51 UTC (permalink / raw)
  To: Jakub Sitnicki
  Cc: bpf, David S. Miller, Eric Dumazet, Paolo Abeni, Simon Horman,
	Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

On Sun, 19 Oct 2025 14:45:25 +0200 Jakub Sitnicki wrote:
> pskb_expand_head() copies headroom, including skb metadata, into the newly
> allocated head, but then clears the metadata. As a result, metadata is lost
> when BPF helpers trigger an skb head reallocation.

True, then again if someone is reallocating headroom they may very well
push a header after, shifting metadata into an uninitialized part of
the headroom. Not sure we can do much about that, but perhaps worth
being more explicit in the commit msg?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata
  2025-10-23 11:55     ` Jakub Sitnicki
@ 2025-10-24  2:32       ` Martin KaFai Lau
  2025-10-24 15:40         ` Jakub Sitnicki
  0 siblings, 1 reply; 24+ messages in thread
From: Martin KaFai Lau @ 2025-10-24  2:32 UTC (permalink / raw)
  To: Jakub Sitnicki
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team



On 10/23/25 4:55 AM, Jakub Sitnicki wrote:
> On Wed, Oct 22, 2025 at 04:12 PM -07, Martin KaFai Lau wrote:
>> On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
>>> @@ -447,12 +448,14 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
>>>      /*
>>>     * Check that skb_meta dynptr is read-only before prog writes to packet payload
>>> - * using dynptr_write helper. Applies only to cloned skbs.
>>> + * using dynptr_write helper, and becomes read-write afterwards. Applies only to
>>> + * cloned skbs.
>>>     */
>>>    SEC("tc")
>>> -int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>>> +int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
>>>    {
>>>    	struct bpf_dynptr data, meta;
>>> +	__u8 meta_have[META_SIZE];
>>>    	const struct ethhdr *eth;
>>>      	bpf_dynptr_from_skb(ctx, 0, &data);
>>> @@ -465,15 +468,23 @@ int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>>>      	/* Expect read-only metadata before unclone */
>>>    	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
>>> -	if (!bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != META_SIZE)
>>> +	if (!bpf_dynptr_is_rdonly(&meta))
>>
>> Can the bpf_dynptr_set_rdonly() be lifted from the bpf_dynptr_from_skb_meta()?
>>
>> iiuc, the remaining thing left should be handling a cloned skb in
>> __bpf_dynptr_write()? The __bpf_skb_store_bytes() is using
>> bpf_try_make_writable, so maybe something similar can be done for the
>> BPF_DYNPTR_TYPE_SKB_META?
> 
> I'm with you. This is not user-friendly at all currently.
> 
> This patch set has already gotten quite long so how about I split out
> the pskb_expand_head patch (#1) and the related selftest change (patch
> #11) from this series, expand it to lift bpf_dynptr_set_rdonly()
> limitation for skb_meta dynptr, and do that first in a dedicated series?


A follow-up on lifting the bpf_dynptr_set_rdonly is fine; keep this 
set as is. Just wanted to check if there is anything stopping it. 
However, imo, having one or two extra patches is fine. The set is not 
difficult to follow.


> 
>>
>>> +		goto out;
>>> +
>>> +	bpf_dynptr_read(meta_have, META_SIZE, &meta, 0, 0);
>>> +	if (!check_metadata(meta_have))
>>>    		goto out;
>>>      	/* Helper write to payload will unclone the packet */
>>>    	bpf_dynptr_write(&data, offsetof(struct ethhdr, h_proto), "x", 1, 0);
>>>    -	/* Expect no metadata after unclone */
>>> +	/* Expect r/w metadata after unclone */
>>>    	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
>>> -	if (bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != 0)
>>> +	if (bpf_dynptr_is_rdonly(&meta))
>>
>> then it does not have to rely on the bpf_dynptr_write(&data, ...) above to make
>> the metadata writable.
>>
>> I have a high level question about the set. I assume the skb_data_move() in
>> patch 2 will be useful in the future to preserve the metadata across the
>> stack. Preserving the metadata across different tc progs (which this set does)
>> is nice to have but it is not the end goal. Can you shed some light on the plan
>> for building on top of this set?
> Right. Starting at the highest level, I want to work toward preserving
> the metadata on RX path first (ongoing), forward path next, and TX path
> last.
> 
> On RX path, the end game is for sk_filter prog to be able to access
> metadata thru dynptr. For that we need to know where the metadata
> resides. I see two ways how we can tackle that:
> 
> A) We keep relying on metadata being in front of skb_mac_header().
> 
>     Fun fact - if you don't call any TC BPF helpers that touch
>     skb->mac_header and don't have any tunnel or tagging devices on RX
>     path, this works out of the box today. But we need to make sure that
>     any call site that changes the MAC header offset, moves the
>     metadata. I expect this approach will be a pain on TX path.
> 
> ... or ...
> 
> B) We track the metadata offset separately from MAC header offset
> 
>     This requires additional state, we need to store the metadata offset
>     somewhere. However, in exchange for a couple bytes we gain some
>     benefits:
> 
>     1. We don't need to move the metadata after skb_pull.
> 
>     2. We only need to move the metadata for skb_push if there's not
>       enough space left, that is the gap between skb->data and where
>       metadata ends is too small.
> 
>       (This means that anyone who is not using skb->data_meta on RX path
>       but the skb_meta dynptr instead, can avoid any memmove's of the
>       metadata itself.)


I don't think I get this part. For example, 
bpf_dynptr_slice_rdwr(&meta_dynptr) should be treated like 
skb->data_meta also?


>       
>     3. We can place the metadata at skb->head, which plays nicely with TX
>        path, where we need the headroom for pushing headers.


Having a way to separately track the metadata start/end is useful.
An unrelated dumb/lazy question: is it possible (or would it take a lot 
of changes) to put the metadata at the head (or after the xdp_frame?) in 
the RX path?

> 
> I've been trying out how (B) plays out when safe-proofing the tunnel &
> tagging devices, your VLANs and GREs, to preserve the metadata.
> 
> To that end I've added a new u16 field in skb_shinfo to track
> meta_end. There a 4B hole there currently and we load the whole
> cacheline from skb_shinf to access meta_len anyway.
> 
> Once I had that, I could modify the skb_data_move() to relocate the
> metadata only if necessary, which looks like so:
> 
> static inline void skb_data_move(struct sk_buff *skb, const int len,
> 				 const unsigned int n)
> {
> 	const u8 meta_len = skb_metadata_len(skb);
> 	u8 *meta, *meta_end;
> 
> 	if (!len || (!n && !meta_len))
> 		return;
> 
> 	if (!meta_len)
> 		goto no_metadata;
> 
> 	/* Not enough headroom left for metadata. Drop it. */
> 	if (WARN_ON_ONCE(meta_len > skb_headroom(skb))) {
> 		skb_metadata_clear(skb);
> 		goto no_metadata;
> 	}
> 
> 	meta_end = skb_metadata_end(skb);
> 	meta = meta_end - meta_len;
> 
> 	/* Metadata in front of data before push/pull. Keep it that way. */
> 	if (meta_end == skb->data - len) {
> 		memmove(meta + len, meta, meta_len + n);
> 		skb_shinfo(skb)->meta_end += len;
> 		return;
> 	}
> 
> 	if (len < 0) {
> 		/* Data pushed. Move metadata to the top. */
> 		memmove(skb->head, meta, meta_len);
> 		skb_shinfo(skb)->meta_end = meta_len;
> 	}
> no_metadata:
> 	memmove(skb->data, skb->data - len, n);
> }
> 
> The goal is for RX path is to hit everwhere just the last memmove(),
> since we will be usually pulling from skb->data, if you're not using the
> skb->data_meta pseudo-pointer in your TC(X) BPF programs.
> 
> There are some verifier changes needed to keep skb->data_meta
> working. We need to move the metadata back in front of the MAC header
> before a TC(X) prog that uses skb->data_meta runs, or things break.
> 
> Early code for that is also available for a preview. I've pushed it to:
> 
> https://github.com/jsitnicki/linux/commits/skb-meta/safeproof-netdevs/

Thanks. I will take a look.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head
  2025-10-24  0:51   ` Jakub Kicinski
@ 2025-10-24 12:17     ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-24 12:17 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, David S. Miller, Eric Dumazet, Paolo Abeni, Simon Horman,
	Martin KaFai Lau, Daniel Borkmann, John Fastabend,
	Stanislav Fomichev, Alexei Starovoitov, Andrii Nakryiko,
	Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Hao Luo,
	Jiri Olsa, Arthur Fabre, netdev, kernel-team

On Thu, Oct 23, 2025 at 05:51 PM -07, Jakub Kicinski wrote:
> On Sun, 19 Oct 2025 14:45:25 +0200 Jakub Sitnicki wrote:
>> pskb_expand_head() copies headroom, including skb metadata, into the newly
>> allocated head, but then clears the metadata. As a result, metadata is lost
>> when BPF helpers trigger an skb head reallocation.
>
> True, then again if someone is reallocating headroom they may very well
> push a header after, shifting metadata into an uninitialized part of
> the headroom. Not sure we can do much about that, but perhaps worth
> being more explicit in the commit msg?

This is where the skb_data_move() helper, proposed by the next patch,
comes in. We will try to move the metadata out of the way if possible,
and clear it if don't have enough headroom left. That approach relies on
all pskb_expand_head users adopting the helper, which is no simple task.

I can add guidance for pskb_expand_head users to the commit description,
or maybe better yet, add a note in pskb_expand_head docs.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata
  2025-10-24  2:32       ` Martin KaFai Lau
@ 2025-10-24 15:40         ` Jakub Sitnicki
  0 siblings, 0 replies; 24+ messages in thread
From: Jakub Sitnicki @ 2025-10-24 15:40 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Daniel Borkmann, John Fastabend, Stanislav Fomichev,
	Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Song Liu,
	Yonghong Song, KP Singh, Hao Luo, Jiri Olsa, Arthur Fabre, bpf,
	netdev, kernel-team, Jesper Dangaard Brouer

On Thu, Oct 23, 2025 at 07:32 PM -07, Martin KaFai Lau wrote:
> On 10/23/25 4:55 AM, Jakub Sitnicki wrote:
>> On Wed, Oct 22, 2025 at 04:12 PM -07, Martin KaFai Lau wrote:
>>> On 10/19/25 5:45 AM, Jakub Sitnicki wrote:
>>>> @@ -447,12 +448,14 @@ int clone_dynptr_empty_on_meta_slice_write(struct __sk_buff *ctx)
>>>>      /*
>>>>     * Check that skb_meta dynptr is read-only before prog writes to packet payload
>>>> - * using dynptr_write helper. Applies only to cloned skbs.
>>>> + * using dynptr_write helper, and becomes read-write afterwards. Applies only to
>>>> + * cloned skbs.
>>>>     */
>>>>    SEC("tc")
>>>> -int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>>>> +int clone_dynptr_rdonly_before_data_dynptr_write_then_rw(struct __sk_buff *ctx)
>>>>    {
>>>>    	struct bpf_dynptr data, meta;
>>>> +	__u8 meta_have[META_SIZE];
>>>>    	const struct ethhdr *eth;
>>>>      	bpf_dynptr_from_skb(ctx, 0, &data);
>>>> @@ -465,15 +468,23 @@ int clone_dynptr_rdonly_before_data_dynptr_write(struct __sk_buff *ctx)
>>>>      	/* Expect read-only metadata before unclone */
>>>>    	bpf_dynptr_from_skb_meta(ctx, 0, &meta);
>>>> -	if (!bpf_dynptr_is_rdonly(&meta) || bpf_dynptr_size(&meta) != META_SIZE)
>>>> +	if (!bpf_dynptr_is_rdonly(&meta))
>>>
>>> Can the bpf_dynptr_set_rdonly() be lifted from the bpf_dynptr_from_skb_meta()?
>>>
>>> iiuc, the remaining thing left should be handling a cloned skb in
>>> __bpf_dynptr_write()? The __bpf_skb_store_bytes() is using
>>> bpf_try_make_writable, so maybe something similar can be done for the
>>> BPF_DYNPTR_TYPE_SKB_META?
>> I'm with you. This is not user-friendly at all currently.
>> This patch set has already gotten quite long so how about I split out
>> the pskb_expand_head patch (#1) and the related selftest change (patch
>> #11) from this series, expand it to lift bpf_dynptr_set_rdonly()
>> limitation for skb_meta dynptr, and do that first in a dedicated series?
>
> A followup on lifting the bpf_dynptr_set_rdonly is fine and keep this set as
> is. Just want to check if there is anything stopping it. However, imo, having
> one or two patches over is fine. The set is not difficult to follow.

All right. I will pile that one on. 16 makes it a nice even number.

[...]

>>> I have a high level question about the set. I assume the skb_data_move() in
>>> patch 2 will be useful in the future to preserve the metadata across the
>>> stack. Preserving the metadata across different tc progs (which this set does)
>>> is nice to have but it is not the end goal. Can you shed some light on the plan
>>> for building on top of this set?
>> Right. Starting at the highest level, I want to work toward preserving
>> the metadata on RX path first (ongoing), forward path next, and TX path
>> last.
>> On RX path, the end game is for sk_filter prog to be able to access
>> metadata thru dynptr. For that we need to know where the metadata
>> resides. I see two ways how we can tackle that:
>> A) We keep relying on metadata being in front of skb_mac_header().
>>     Fun fact - if you don't call any TC BPF helpers that touch
>>     skb->mac_header and don't have any tunnel or tagging devices on RX
>>     path, this works out of the box today. But we need to make sure that
>>     any call site that changes the MAC header offset, moves the
>>     metadata. I expect this approach will be a pain on TX path.
>> ... or ...
>> B) We track the metadata offset separately from MAC header offset
>>     This requires additional state, we need to store the metadata offset
>>     somewhere. However, in exchange for a couple bytes we gain some
>>     benefits:
>>     1. We don't need to move the metadata after skb_pull.
>>     2. We only need to move the metadata for skb_push if there's not
>>       enough space left, that is the gap between skb->data and where
>>       metadata ends is too small.
>>       (This means that anyone who is not using skb->data_meta on RX path
>>       but the skb_meta dynptr instead, can avoid any memmove's of the
>>       metadata itself.)
>
>
> I don't think I get this part. For example, bpf_dynptr_slice_rdwr(&meta_dynptr)
> should be treated like
> skb->data_meta also?

That's the thing. With a dynptr we don't care where the metadata is
located. Hence, there is no need to move it before the prog runs, even
if there is a gap between the metadata and the MAC header, say, because
of GRE decap. If we track the metadata separately, skb_metadata_end()
could become:

static inline void *skb_metadata_end(const struct sk_buff *skb)
{
	return skb->head + skb_shinfo(skb)->meta_end;
}

[..]

> Having a way to separately track the metadata start/end is useful.
> An unrelated dumb/lazy question, is it possible/lot-of-changes to put the
> metadata in the head (or after xdp_frame?) in the RX path?

We've been asking ourselves the same theoretical question. There are
at least a couple of challenges to retrofitting such a change:

1. You'd need a way to track where the metadata ends in XDP. As I hear
   from Jesper, XDP metadata was intentionally placed right in front of
   the packet to avoid computing/loading another pointer.

2. You'd be moving the contents when growing the metadata with
   bpf_xdp_adjust_meta. Or you'd need to add a way to resize it by
   moving the end.

Not really something we've considered attacking ATM.

My gut feeling is that it will do us good to leave the metadata close to
the MAC header during the initial adoption phase, so that we catch all
the call sites that push headers without moving the metadata and need
fixing.

[...]


Thread overview: 24+ messages
2025-10-19 12:45 [PATCH bpf-next v2 00/15] Make TC BPF helpers preserve skb metadata Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 01/15] net: Preserve metadata on pskb_expand_head Jakub Sitnicki
2025-10-24  0:51   ` Jakub Kicinski
2025-10-24 12:17     ` Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 02/15] net: Helper to move packet data and metadata after skb_push/pull Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 03/15] vlan: Make vlan_remove_tag return nothing Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 04/15] bpf: Make bpf_skb_vlan_pop helper metadata-safe Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 05/15] bpf: Make bpf_skb_vlan_push " Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 06/15] bpf: Make bpf_skb_adjust_room metadata-safe Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 07/15] bpf: Make bpf_skb_change_proto helper metadata-safe Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 08/15] bpf: Make bpf_skb_change_head " Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 09/15] selftests/bpf: Verify skb metadata in BPF instead of userspace Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 10/15] selftests/bpf: Dump skb metadata on verification failure Jakub Sitnicki
2025-10-22 23:30   ` Martin KaFai Lau
2025-10-23 10:38     ` Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 11/15] selftests/bpf: Expect unclone to preserve skb metadata Jakub Sitnicki
2025-10-22 23:12   ` Martin KaFai Lau
2025-10-23 11:55     ` Jakub Sitnicki
2025-10-24  2:32       ` Martin KaFai Lau
2025-10-24 15:40         ` Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 12/15] selftests/bpf: Cover skb metadata access after vlan push/pop helper Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 13/15] selftests/bpf: Cover skb metadata access after bpf_skb_adjust_room Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 14/15] selftests/bpf: Cover skb metadata access after change_head/tail helper Jakub Sitnicki
2025-10-19 12:45 ` [PATCH bpf-next v2 15/15] selftests/bpf: Cover skb metadata access after bpf_skb_change_proto Jakub Sitnicki
