* [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently
@ 2025-02-12  6:18 Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 01/12] bpf: add networking timestamping support to bpf_get/setsockopt() Jason Xing
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

"Timestamping is key to debugging network stack latency. With
SO_TIMESTAMPING, bugs that are otherwise incorrectly assumed to be
network issues can be attributed to the kernel." This is extracted
from the talk "SO_TIMESTAMPING: Powering Fleetwide RPC Monitoring"
addressed by Willem de Bruijn at netdevconf 0x17).

There are a few areas that need improvement, with easier use and
lower performance impact in mind, which I highlighted and mainly
discussed at netconf 2024 with Willem de Bruijn and John Fastabend:
uAPI compatibility, extra system call overhead, and the need for
application modification. I initially managed to solve these issues
by writing a kernel module that hooks various key functions. However,
this approach is not suitable for the next kernel release. Therefore,
a BPF extension was proposed. Along the way, Martin KaFai Lau has
provided invaluable suggestions about BPF. Many thanks here!

This series adds the BPF networking timestamping infrastructure by
reusing most of the TX timestamping callbacks that are currently
enabled by SO_TIMESTAMPING. This series also adds TX timestamping
support for TCP. RX timestamping and UDP support will be added in
the future.

---
v9
Link: https://lore.kernel.org/all/20250208103220.72294-1-kerneljasonxing@gmail.com/
1. set the hwtstamp on the skb when it enters the hw SND case
2. fix the co-existence problem in patch 9 and add a corresponding check
in patch 12.
3. refine some commit messages and titles

v8
Link: https://lore.kernel.org/all/20250128084620.57547-1-kerneljasonxing@gmail.com/
1. adjust some commit messages and titles
2. add sk cookie in selftests
3. handle the NULL pointer in hwstamp
4. use kfunc to do selective sampling

v7
Link: https://lore.kernel.org/all/20250121012901.87763-1-kerneljasonxing@gmail.com/
1. target bpf-next tree
2. simply and directly stop the timestamping callbacks from calling a few
BPF helpers due to safety concerns.
3. add more new testcases and adjust the existing testcases
4. revise some comments of new timestamping callbacks
5. remove a few BPF CGROUP locks

RFC v6
In the meantime, any suggestions and reviews are welcome!
Link: https://lore.kernel.org/all/20250112113748.73504-1-kerneljasonxing@gmail.com/
1. handle those safety problems by using the correct method.
2. support bpf_getsockopt.
3. adjust the position of BPF_SOCK_OPS_TS_TCP_SND_CB
4. fix mishandling the hardware timestamp error
5. add more corresponding tests

v5
Link: https://lore.kernel.org/all/20241207173803.90744-1-kerneljasonxing@gmail.com/
1. handle the safety issues when someone tries to call unrelated bpf
helpers.
2. avoid adding direct function call in the hot path like
__dev_queue_xmit()
3. remove reporting the hardware timestamp and tskey since they can be
fetched through the existing helper with the help of
bpf_skops_init_skb(), please see the selftest.
4. add new sendmsg callback in tcp_sendmsg, and introduce tskey_bpf used
by bpf program to correlate tcp_sendmsg with other hook points in patch [13/15].

v4
Link: https://lore.kernel.org/all/20241028110535.82999-1-kerneljasonxing@gmail.com/
1. introduce sk->sk_bpf_cb_flags to let user use bpf_setsockopt() (Martin)
2. introduce SKBTX_BPF to enable the bpf SO_TIMESTAMPING feature (Martin)
3. introduce bpf map in tests (Martin)
4. I chose to make this series as simple as possible, so I only support
most cases in the tx path for the TCP protocol.

v3
Link: https://lore.kernel.org/all/20241012040651.95616-1-kerneljasonxing@gmail.com/
1. support UDP proto by introducing a new generation point.
2. for OPT_ID, introduce sk_tskey_bpf_offset to compute the delta
between the current socket key and the bpf socket key. It is designed
for UDP, and also applies to TCP.
3. support bpf_getsockopt()
4. use cgroup static key instead.
5. add one simple bpf selftest to show how it can be used.
6. remove the rx support from v2 because the number of patches could
exceed the limit of one series.

V2
Link: https://lore.kernel.org/all/20241008095109.99918-1-kerneljasonxing@gmail.com/
1. Introduce tsflag requestors so that we are able to extend more in the
future. Besides, it enables TX flags for the bpf extension feature
separately without breaking existing users. Suggested by Vadim Fedorenko.
2. introduce a static key to control the whole feature. (Willem)
3. Open the gate of bpf_setsockopt for the SO_TIMESTAMPING feature in
some TX/RX cases, not all the cases.


Jason Xing (12):
  bpf: add networking timestamping support to bpf_get/setsockopt()
  bpf: prepare the sock_ops ctx and call bpf prog for TX timestamping
  bpf: prevent unsafe access to the sock fields in the BPF timestamping
    callback
  bpf: disable unsafe helpers in TX timestamping callbacks
  net-timestamp: prepare for isolating two modes of SO_TIMESTAMPING
  bpf: add BPF_SOCK_OPS_TS_SCHED_OPT_CB callback
  bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback
  bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback
  bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  bpf: add BPF_SOCK_OPS_TS_SND_CB callback
  bpf: support selective sampling for bpf timestamping
  selftests/bpf: add simple bpf tests in the tx path for timestamping
    feature

 include/linux/filter.h                        |   1 +
 include/linux/skbuff.h                        |  12 +-
 include/net/sock.h                            |  10 +
 include/net/tcp.h                             |   7 +-
 include/uapi/linux/bpf.h                      |  30 +++
 kernel/bpf/btf.c                              |   1 +
 net/core/dev.c                                |   3 +-
 net/core/filter.c                             |  80 +++++-
 net/core/skbuff.c                             |  50 ++++
 net/core/sock.c                               |  14 +
 net/dsa/user.c                                |   2 +-
 net/ipv4/tcp.c                                |   6 +-
 net/ipv4/tcp_input.c                          |   2 +
 net/ipv4/tcp_output.c                         |   2 +
 net/socket.c                                  |   2 +-
 tools/include/uapi/linux/bpf.h                |  23 ++
 .../bpf/prog_tests/net_timestamping.c         | 231 +++++++++++++++++
 .../selftests/bpf/progs/net_timestamping.c    | 244 ++++++++++++++++++
 18 files changed, 706 insertions(+), 14 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/net_timestamping.c
 create mode 100644 tools/testing/selftests/bpf/progs/net_timestamping.c

-- 
2.43.5



* [PATCH bpf-next v10 01/12] bpf: add networking timestamping support to bpf_get/setsockopt()
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 02/12] bpf: prepare the sock_ops ctx and call bpf prog for TX timestamping Jason Xing
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

The new SK_BPF_CB_FLAGS optname and the new SK_BPF_CB_TX_TIMESTAMPING
flag are added to bpf_get/setsockopt(). The later patches will
implement the BPF networking timestamping. The BPF program will use
bpf_setsockopt(SK_BPF_CB_FLAGS, SK_BPF_CB_TX_TIMESTAMPING) to
enable the BPF networking timestamping on a socket.
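
A minimal usage sketch from a cgroup sock_ops program might look like
the following (illustrative only, not part of this patch; the program
name, attach point, and includes are assumptions):

  /* assumes "vmlinux.h" (or the updated uapi bpf.h) and
   * <bpf/bpf_helpers.h>
   */
  SEC("sockops")
  int enable_bpf_tx_tstamp(struct bpf_sock_ops *skops)
  {
          int flags = SK_BPF_CB_TX_TIMESTAMPING;

          /* Turn on BPF TX timestamping once the connection is established. */
          if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB ||
              skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
                  bpf_setsockopt(skops, SOL_SOCKET, SK_BPF_CB_FLAGS,
                                 &flags, sizeof(flags));
          return 1;
  }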

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/net/sock.h             |  3 +++
 include/uapi/linux/bpf.h       |  8 ++++++++
 net/core/filter.c              | 23 +++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  1 +
 4 files changed, 35 insertions(+)

diff --git a/include/net/sock.h b/include/net/sock.h
index 8036b3b79cd8..7916982343c6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -303,6 +303,7 @@ struct sk_filter;
   *	@sk_stamp: time stamp of last packet received
   *	@sk_stamp_seq: lock for accessing sk_stamp on 32 bit architectures only
   *	@sk_tsflags: SO_TIMESTAMPING flags
+  *	@sk_bpf_cb_flags: used in bpf_setsockopt()
   *	@sk_use_task_frag: allow sk_page_frag() to use current->task_frag.
   *			   Sockets that can be used under memory reclaim should
   *			   set this to false.
@@ -445,6 +446,8 @@ struct sock {
 	u32			sk_reserved_mem;
 	int			sk_forward_alloc;
 	u32			sk_tsflags;
+#define SK_BPF_CB_FLAG_TEST(SK, FLAG) ((SK)->sk_bpf_cb_flags & (FLAG))
+	u32			sk_bpf_cb_flags;
 	__cacheline_group_end(sock_write_rxtx);
 
 	__cacheline_group_begin(sock_write_tx);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index fff6cdb8d11a..fa666d51dffe 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -6916,6 +6916,13 @@ enum {
 	BPF_SOCK_OPS_ALL_CB_FLAGS       = 0x7F,
 };
 
+/* Definitions for bpf_sk_cb_flags */
+enum {
+	SK_BPF_CB_TX_TIMESTAMPING	= 1<<0,
+	SK_BPF_CB_MASK			= (SK_BPF_CB_TX_TIMESTAMPING - 1) |
+					   SK_BPF_CB_TX_TIMESTAMPING
+};
+
 /* List of known BPF sock_ops operators.
  * New entries can only be added at the end
  */
@@ -7094,6 +7101,7 @@ enum {
 	TCP_BPF_SYN_IP		= 1006, /* Copy the IP[46] and TCP header */
 	TCP_BPF_SYN_MAC         = 1007, /* Copy the MAC, IP[46], and TCP header */
 	TCP_BPF_SOCK_OPS_CB_FLAGS = 1008, /* Get or Set TCP sock ops flags */
+	SK_BPF_CB_FLAGS		= 1009, /* Used to set socket bpf flags */
 };
 
 enum {
diff --git a/net/core/filter.c b/net/core/filter.c
index 2ec162dd83c4..1c6c07507a78 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -5222,6 +5222,25 @@ static const struct bpf_func_proto bpf_get_socket_uid_proto = {
 	.arg1_type      = ARG_PTR_TO_CTX,
 };
 
+static int sk_bpf_set_get_cb_flags(struct sock *sk, char *optval, bool getopt)
+{
+	u32 sk_bpf_cb_flags;
+
+	if (getopt) {
+		*(u32 *)optval = sk->sk_bpf_cb_flags;
+		return 0;
+	}
+
+	sk_bpf_cb_flags = *(u32 *)optval;
+
+	if (sk_bpf_cb_flags & ~SK_BPF_CB_MASK)
+		return -EINVAL;
+
+	sk->sk_bpf_cb_flags = sk_bpf_cb_flags;
+
+	return 0;
+}
+
 static int sol_socket_sockopt(struct sock *sk, int optname,
 			      char *optval, int *optlen,
 			      bool getopt)
@@ -5238,6 +5257,7 @@ static int sol_socket_sockopt(struct sock *sk, int optname,
 	case SO_MAX_PACING_RATE:
 	case SO_BINDTOIFINDEX:
 	case SO_TXREHASH:
+	case SK_BPF_CB_FLAGS:
 		if (*optlen != sizeof(int))
 			return -EINVAL;
 		break;
@@ -5247,6 +5267,9 @@ static int sol_socket_sockopt(struct sock *sk, int optname,
 		return -EINVAL;
 	}
 
+	if (optname == SK_BPF_CB_FLAGS)
+		return sk_bpf_set_get_cb_flags(sk, optval, getopt);
+
 	if (getopt) {
 		if (optname == SO_BINDTODEVICE)
 			return -EINVAL;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2acf9b336371..70366f74ef4e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7091,6 +7091,7 @@ enum {
 	TCP_BPF_SYN_IP		= 1006, /* Copy the IP[46] and TCP header */
 	TCP_BPF_SYN_MAC         = 1007, /* Copy the MAC, IP[46], and TCP header */
 	TCP_BPF_SOCK_OPS_CB_FLAGS = 1008, /* Get or Set TCP sock ops flags */
+	SK_BPF_CB_FLAGS		= 1009, /* Used to set socket bpf flags */
 };
 
 enum {
-- 
2.43.5



* [PATCH bpf-next v10 02/12] bpf: prepare the sock_ops ctx and call bpf prog for TX timestamping
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 01/12] bpf: add networking timestamping support to bpf_get/setsockopt() Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 03/12] bpf: prevent unsafe access to the sock fields in the BPF timestamping callback Jason Xing
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

This patch introduces a new bpf_skops_tx_timestamping() function
that prepares the "struct bpf_sock_ops" ctx and then executes the
sockops BPF program.

The subsequent patch will utilize bpf_skops_tx_timestamping() at
the existing TX timestamping kernel callback (__skb_tstamp_tx
specifically) to call the sockops BPF program. Later, four callback
points, based on this patch, will be introduced to report
information to user space.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/net/sock.h |  7 +++++++
 net/core/sock.c    | 14 ++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/net/sock.h b/include/net/sock.h
index 7916982343c6..6f4d54faba92 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2923,6 +2923,13 @@ int sock_set_timestamping(struct sock *sk, int optname,
 			  struct so_timestamping timestamping);
 
 void sock_enable_timestamps(struct sock *sk);
+#if defined(CONFIG_CGROUP_BPF)
+void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op);
+#else
+static inline void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op)
+{
+}
+#endif
 void sock_no_linger(struct sock *sk);
 void sock_set_keepalive(struct sock *sk);
 void sock_set_priority(struct sock *sk, u32 priority);
diff --git a/net/core/sock.c b/net/core/sock.c
index eae2ae70a2e0..bde45569d4da 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -948,6 +948,20 @@ int sock_set_timestamping(struct sock *sk, int optname,
 	return 0;
 }
 
+#if defined(CONFIG_CGROUP_BPF)
+void bpf_skops_tx_timestamping(struct sock *sk, struct sk_buff *skb, int op)
+{
+	struct bpf_sock_ops_kern sock_ops;
+
+	memset(&sock_ops, 0, offsetof(struct bpf_sock_ops_kern, temp));
+	sock_ops.op = op;
+	sock_ops.is_fullsock = 1;
+	sock_ops.sk = sk;
+	bpf_skops_init_skb(&sock_ops, skb, 0);
+	__cgroup_bpf_run_filter_sock_ops(sk, &sock_ops, CGROUP_SOCK_OPS);
+}
+#endif
+
 void sock_set_keepalive(struct sock *sk)
 {
 	lock_sock(sk);
-- 
2.43.5



* [PATCH bpf-next v10 03/12] bpf: prevent unsafe access to the sock fields in the BPF timestamping callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 01/12] bpf: add networking timestamping support to bpf_get/setsockopt() Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 02/12] bpf: prepare the sock_ops ctx and call bpf prog for TX timestamping Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 04/12] bpf: disable unsafe helpers in TX timestamping callbacks Jason Xing
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

The subsequent patch will implement BPF TX timestamping. It will
call the sockops BPF program without holding the sock lock.

This breaks the current assumption that all sock_ops programs will
hold the sock lock. The sock fields of the uapi's bpf_sock_ops
require this assumption.

To address this, a new "u8 is_locked_tcp_sock;" field is added. This
patch sets it in the current sock_ops callbacks. The "is_fullsock"
test is then replaced by the "is_locked_tcp_sock" test during
sock_ops_convert_ctx_access().

The new TX timestamping callbacks added in the subsequent patch will
not have this set. This will prevent unsafe access from the new
timestamping callbacks.

Potentially, we could allow read-only access. However, this would
require identifying which callbacks are read-safe-only and would also
require additional BPF instruction rewrites in the convert_ctx. Since
the BPF program can always read everything from a socket (e.g., by
using bpf_core_cast), this patch keeps it simple and disables all read
and write access to any socket fields through the bpf_sock_ops
UAPI from the new TX timestamping callbacks.
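
As a rough illustration of the read-only point above (not part of
this patch; assumes libbpf's bpf_core_cast() and the usual
vmlinux.h-based includes), a sockops program handling one of the new
unlocked callbacks could still inspect socket state like this:

  struct tcp_sock *tp;

  /* Read-only, verifier-checked access; no sock lock needed. */
  tp = bpf_core_cast(skops->sk, struct tcp_sock);
  if (tp)
          bpf_printk("snd_una %u", tp->snd_una);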

Moreover, note that some of the fields in bpf_sock_ops are specific
to tcp_sock, and sock_ops currently only supports tcp_sock. In
the future, UDP timestamping will be added, which will also break
this assumption. The same idea used in this patch will be reused.
Considering that the current sock_ops only supports tcp_sock, the
variable is named is_locked_"tcp"_sock.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/linux/filter.h | 1 +
 include/net/tcp.h      | 1 +
 net/core/filter.c      | 8 ++++----
 net/ipv4/tcp_input.c   | 2 ++
 net/ipv4/tcp_output.c  | 2 ++
 5 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index a3ea46281595..d36d5d5180b1 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1508,6 +1508,7 @@ struct bpf_sock_ops_kern {
 	void	*skb_data_end;
 	u8	op;
 	u8	is_fullsock;
+	u8	is_locked_tcp_sock;
 	u8	remaining_opt_len;
 	u64	temp;			/* temp and everything after is not
 					 * initialized to 0 before calling
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 5b2b04835688..4c4dca59352b 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -2649,6 +2649,7 @@ static inline int tcp_call_bpf(struct sock *sk, int op, u32 nargs, u32 *args)
 	memset(&sock_ops, 0, offsetof(struct bpf_sock_ops_kern, temp));
 	if (sk_fullsock(sk)) {
 		sock_ops.is_fullsock = 1;
+		sock_ops.is_locked_tcp_sock = 1;
 		sock_owned_by_me(sk);
 	}
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 1c6c07507a78..8631036f6b64 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -10381,10 +10381,10 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 		}							      \
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(			      \
 						struct bpf_sock_ops_kern,     \
-						is_fullsock),		      \
+						is_locked_tcp_sock),	      \
 				      fullsock_reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
-					       is_fullsock));		      \
+					       is_locked_tcp_sock));	      \
 		*insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp);	      \
 		if (si->dst_reg == si->src_reg)				      \
 			*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg,	      \
@@ -10469,10 +10469,10 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 					       temp));			      \
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(			      \
 						struct bpf_sock_ops_kern,     \
-						is_fullsock),		      \
+						is_locked_tcp_sock),	      \
 				      reg, si->dst_reg,			      \
 				      offsetof(struct bpf_sock_ops_kern,      \
-					       is_fullsock));		      \
+					       is_locked_tcp_sock));	      \
 		*insn++ = BPF_JMP_IMM(BPF_JEQ, reg, 0, 2);		      \
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(			      \
 						struct bpf_sock_ops_kern, sk),\
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index eb82e01da911..95733dcdfb4b 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -169,6 +169,7 @@ static void bpf_skops_parse_hdr(struct sock *sk, struct sk_buff *skb)
 	memset(&sock_ops, 0, offsetof(struct bpf_sock_ops_kern, temp));
 	sock_ops.op = BPF_SOCK_OPS_PARSE_HDR_OPT_CB;
 	sock_ops.is_fullsock = 1;
+	sock_ops.is_locked_tcp_sock = 1;
 	sock_ops.sk = sk;
 	bpf_skops_init_skb(&sock_ops, skb, tcp_hdrlen(skb));
 
@@ -185,6 +186,7 @@ static void bpf_skops_established(struct sock *sk, int bpf_op,
 	memset(&sock_ops, 0, offsetof(struct bpf_sock_ops_kern, temp));
 	sock_ops.op = bpf_op;
 	sock_ops.is_fullsock = 1;
+	sock_ops.is_locked_tcp_sock = 1;
 	sock_ops.sk = sk;
 	/* sk with TCP_REPAIR_ON does not have skb in tcp_finish_connect */
 	if (skb)
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index bc95d2a5924f..a0e779bdbc6b 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -525,6 +525,7 @@ static void bpf_skops_hdr_opt_len(struct sock *sk, struct sk_buff *skb,
 		sock_owned_by_me(sk);
 
 		sock_ops.is_fullsock = 1;
+		sock_ops.is_locked_tcp_sock = 1;
 		sock_ops.sk = sk;
 	}
 
@@ -570,6 +571,7 @@ static void bpf_skops_write_hdr_opt(struct sock *sk, struct sk_buff *skb,
 		sock_owned_by_me(sk);
 
 		sock_ops.is_fullsock = 1;
+		sock_ops.is_locked_tcp_sock = 1;
 		sock_ops.sk = sk;
 	}
 
-- 
2.43.5



* [PATCH bpf-next v10 04/12] bpf: disable unsafe helpers in TX timestamping callbacks
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (2 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 03/12] bpf: prevent unsafe access to the sock fields in the BPF timestamping callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 05/12] net-timestamp: prepare for isolating two modes of SO_TIMESTAMPING Jason Xing
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

New TX timestamping sock_ops callbacks will be added in the
subsequent patch. Some of the existing BPF helpers will not
be safe to use in the TX timestamping callbacks.

The bpf_sock_ops_setsockopt, bpf_sock_ops_getsockopt, and
bpf_sock_ops_cb_flags_set require owning the sock lock. TX
timestamping callbacks will not own the lock.

The bpf_sock_ops_load_hdr_opt helper needs skb->data to point
to the TCP header. This will not be the case in the TX timestamping
callbacks.

At the beginning of these helpers, this patch checks bpf_sock->op
to ensure they are used by the existing sock_ops callbacks only.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 net/core/filter.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/net/core/filter.c b/net/core/filter.c
index 8631036f6b64..7f56d0bbeb00 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -5523,6 +5523,11 @@ static int __bpf_setsockopt(struct sock *sk, int level, int optname,
 	return -EINVAL;
 }
 
+static bool is_locked_tcp_sock_ops(struct bpf_sock_ops_kern *bpf_sock)
+{
+	return bpf_sock->op <= BPF_SOCK_OPS_WRITE_HDR_OPT_CB;
+}
+
 static int _bpf_setsockopt(struct sock *sk, int level, int optname,
 			   char *optval, int optlen)
 {
@@ -5673,6 +5678,9 @@ static const struct bpf_func_proto bpf_sock_addr_getsockopt_proto = {
 BPF_CALL_5(bpf_sock_ops_setsockopt, struct bpf_sock_ops_kern *, bpf_sock,
 	   int, level, int, optname, char *, optval, int, optlen)
 {
+	if (!is_locked_tcp_sock_ops(bpf_sock))
+		return -EOPNOTSUPP;
+
 	return _bpf_setsockopt(bpf_sock->sk, level, optname, optval, optlen);
 }
 
@@ -5758,6 +5766,9 @@ static int bpf_sock_ops_get_syn(struct bpf_sock_ops_kern *bpf_sock,
 BPF_CALL_5(bpf_sock_ops_getsockopt, struct bpf_sock_ops_kern *, bpf_sock,
 	   int, level, int, optname, char *, optval, int, optlen)
 {
+	if (!is_locked_tcp_sock_ops(bpf_sock))
+		return -EOPNOTSUPP;
+
 	if (IS_ENABLED(CONFIG_INET) && level == SOL_TCP &&
 	    optname >= TCP_BPF_SYN && optname <= TCP_BPF_SYN_MAC) {
 		int ret, copy_len = 0;
@@ -5800,6 +5811,9 @@ BPF_CALL_2(bpf_sock_ops_cb_flags_set, struct bpf_sock_ops_kern *, bpf_sock,
 	struct sock *sk = bpf_sock->sk;
 	int val = argval & BPF_SOCK_OPS_ALL_CB_FLAGS;
 
+	if (!is_locked_tcp_sock_ops(bpf_sock))
+		return -EOPNOTSUPP;
+
 	if (!IS_ENABLED(CONFIG_INET) || !sk_fullsock(sk))
 		return -EINVAL;
 
@@ -7609,6 +7623,9 @@ BPF_CALL_4(bpf_sock_ops_load_hdr_opt, struct bpf_sock_ops_kern *, bpf_sock,
 	u8 search_kind, search_len, copy_len, magic_len;
 	int ret;
 
+	if (!is_locked_tcp_sock_ops(bpf_sock))
+		return -EOPNOTSUPP;
+
 	/* 2 byte is the minimal option len except TCPOPT_NOP and
 	 * TCPOPT_EOL which are useless for the bpf prog to learn
 	 * and this helper disallow loading them also.
-- 
2.43.5



* [PATCH bpf-next v10 05/12] net-timestamp: prepare for isolating two modes of SO_TIMESTAMPING
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (3 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 04/12] bpf: disable unsafe helpers in TX timestamping callbacks Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 06/12] bpf: add BPF_SOCK_OPS_TS_SCHED_OPT_CB callback Jason Xing
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

No functional changes here. Only add a check to see whether the
orig_skb matches the application's use of SO_TIMESTAMPING.

In this series, bpf timestamping and the existing socket timestamping
are implemented in the same function, __skb_tstamp_tx(). To test
whether the socket has the socket timestamping feature enabled, the
function skb_tstamp_tx_report_so_timestamping() is added.

In the next patch, a similar check for the bpf timestamping feature
will be introduced, namely skb_tstamp_tx_report_bpf_timestamping().
Then users will be able to tell whether the socket has either or
both of the features enabled.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 net/core/skbuff.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a441613a1e6c..cd742dcad052 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5539,6 +5539,23 @@ void skb_complete_tx_timestamp(struct sk_buff *skb,
 }
 EXPORT_SYMBOL_GPL(skb_complete_tx_timestamp);
 
+static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
+						 struct skb_shared_hwtstamps *hwts,
+						 int tstype)
+{
+	switch (tstype) {
+	case SCM_TSTAMP_SCHED:
+		return skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP;
+	case SCM_TSTAMP_SND:
+		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP :
+						    SKBTX_SW_TSTAMP);
+	case SCM_TSTAMP_ACK:
+		return TCP_SKB_CB(skb)->txstamp_ack;
+	}
+
+	return false;
+}
+
 void __skb_tstamp_tx(struct sk_buff *orig_skb,
 		     const struct sk_buff *ack_skb,
 		     struct skb_shared_hwtstamps *hwtstamps,
@@ -5551,6 +5568,9 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
 	if (!sk)
 		return;
 
+	if (!skb_tstamp_tx_report_so_timestamping(orig_skb, hwtstamps, tstype))
+		return;
+
 	tsflags = READ_ONCE(sk->sk_tsflags);
 	if (!hwtstamps && !(tsflags & SOF_TIMESTAMPING_OPT_TX_SWHW) &&
 	    skb_shinfo(orig_skb)->tx_flags & SKBTX_IN_PROGRESS)
-- 
2.43.5



* [PATCH bpf-next v10 06/12] bpf: add BPF_SOCK_OPS_TS_SCHED_OPT_CB callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (4 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 05/12] net-timestamp: prepare for isolating two modes of SO_TIMESTAMPING Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback Jason Xing
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

Support SCM_TSTAMP_SCHED case for bpf timestamping.

Add a new sock_ops callback, BPF_SOCK_OPS_TS_SCHED_OPT_CB. This
callback will occur at the same timestamping point as the user
space's SCM_TSTAMP_SCHED. The BPF program can use it to get the
same SCM_TSTAMP_SCHED timestamp without modifying the user-space
application.

A new SKBTX_BPF flag is added to mark skb_shinfo(skb)->tx_flags,
ensuring that the new BPF timestamping and the current user
space's SO_TIMESTAMPING do not interfere with each other.
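
A hedged sketch of a sockops program consuming the new callback (the
program name is illustrative and the usual vmlinux.h/bpf_helpers.h
includes are assumed):

  SEC("sockops")
  int ts_sched(struct bpf_sock_ops *skops)
  {
          if (skops->op == BPF_SOCK_OPS_TS_SCHED_OPT_CB)
                  /* Same point where SCM_TSTAMP_SCHED is generated. */
                  bpf_printk("skb entered the qdisc at %llu ns",
                             bpf_ktime_get_ns());
          return 1;
  }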

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/linux/skbuff.h         |  6 +++++-
 include/uapi/linux/bpf.h       |  4 ++++
 net/core/dev.c                 |  3 ++-
 net/core/skbuff.c              | 20 ++++++++++++++++++++
 tools/include/uapi/linux/bpf.h |  4 ++++
 5 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bb2b751d274a..52f6e033e704 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -489,10 +489,14 @@ enum {
 
 	/* generate software time stamp when entering packet scheduling */
 	SKBTX_SCHED_TSTAMP = 1 << 6,
+
+	/* used for bpf extension when a bpf program is loaded */
+	SKBTX_BPF = 1 << 7,
 };
 
 #define SKBTX_ANY_SW_TSTAMP	(SKBTX_SW_TSTAMP    | \
-				 SKBTX_SCHED_TSTAMP)
+				 SKBTX_SCHED_TSTAMP | \
+				 SKBTX_BPF)
 #define SKBTX_ANY_TSTAMP	(SKBTX_HW_TSTAMP | \
 				 SKBTX_HW_TSTAMP_USE_CYCLES | \
 				 SKBTX_ANY_SW_TSTAMP)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index fa666d51dffe..68664ececdc0 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7035,6 +7035,10 @@ enum {
 					 * by the kernel or the
 					 * earlier bpf-progs.
 					 */
+	BPF_SOCK_OPS_TS_SCHED_OPT_CB,	/* Called when skb is passing through
+					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
diff --git a/net/core/dev.c b/net/core/dev.c
index c0021cbd28fc..cbbde68c17cb 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4500,7 +4500,8 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 	skb_reset_mac_header(skb);
 	skb_assert_len(skb);
 
-	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP))
+	if (unlikely(skb_shinfo(skb)->tx_flags &
+		     (SKBTX_SCHED_TSTAMP | SKBTX_BPF)))
 		__skb_tstamp_tx(skb, NULL, NULL, skb->sk, SCM_TSTAMP_SCHED);
 
 	/* Disable soft irqs for various locks below. Also
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index cd742dcad052..7bac5e950e3d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5556,6 +5556,23 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
 	return false;
 }
 
+static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
+						  struct sock *sk,
+						  int tstype)
+{
+	int op;
+
+	switch (tstype) {
+	case SCM_TSTAMP_SCHED:
+		op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
+		break;
+	default:
+		return;
+	}
+
+	bpf_skops_tx_timestamping(sk, skb, op);
+}
+
 void __skb_tstamp_tx(struct sk_buff *orig_skb,
 		     const struct sk_buff *ack_skb,
 		     struct skb_shared_hwtstamps *hwtstamps,
@@ -5568,6 +5585,9 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
 	if (!sk)
 		return;
 
+	if (skb_shinfo(orig_skb)->tx_flags & SKBTX_BPF)
+		skb_tstamp_tx_report_bpf_timestamping(orig_skb, sk, tstype);
+
 	if (!skb_tstamp_tx_report_so_timestamping(orig_skb, hwtstamps, tstype))
 		return;
 
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 70366f74ef4e..eed91b7296b7 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7025,6 +7025,10 @@ enum {
 					 * by the kernel or the
 					 * earlier bpf-progs.
 					 */
+	BPF_SOCK_OPS_TS_SCHED_OPT_CB,	/* Called when skb is passing through
+					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
-- 
2.43.5



* [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (5 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 06/12] bpf: add BPF_SOCK_OPS_TS_SCHED_OPT_CB callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12 23:18   ` Martin KaFai Lau
  2025-02-12  6:18 ` [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback Jason Xing
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

Support sw SCM_TSTAMP_SND case for bpf timestamping.

Add a new sock_ops callback, BPF_SOCK_OPS_TS_SW_OPT_CB. This
callback will occur at the same timestamping point as the user
space's software SCM_TSTAMP_SND. The BPF program can use it to
get the same SCM_TSTAMP_SND timestamp without modifying the
user-space application.

Based on this patch, the BPF program will get the software
timestamp when the driver is ready to send the skb. In the
subsequent patch, the hardware timestamp will be supported.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/linux/skbuff.h         | 2 +-
 include/uapi/linux/bpf.h       | 4 ++++
 net/core/skbuff.c              | 9 ++++++++-
 tools/include/uapi/linux/bpf.h | 4 ++++
 4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 52f6e033e704..76582500c5ea 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4568,7 +4568,7 @@ void skb_tstamp_tx(struct sk_buff *orig_skb,
 static inline void skb_tx_timestamp(struct sk_buff *skb)
 {
 	skb_clone_tx_timestamp(skb);
-	if (skb_shinfo(skb)->tx_flags & SKBTX_SW_TSTAMP)
+	if (skb_shinfo(skb)->tx_flags & (SKBTX_SW_TSTAMP | SKBTX_BPF))
 		skb_tstamp_tx(skb, NULL);
 }
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 68664ececdc0..b3bd92281084 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7039,6 +7039,10 @@ enum {
 					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_SW_OPT_CB,	/* Called when skb is about to send
+					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7bac5e950e3d..d80d2137692f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5557,6 +5557,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
 }
 
 static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
+						  struct skb_shared_hwtstamps *hwts,
 						  struct sock *sk,
 						  int tstype)
 {
@@ -5566,6 +5567,11 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
 	case SCM_TSTAMP_SCHED:
 		op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
 		break;
+	case SCM_TSTAMP_SND:
+		if (hwts)
+			return;
+		op = BPF_SOCK_OPS_TS_SW_OPT_CB;
+		break;
 	default:
 		return;
 	}
@@ -5586,7 +5592,8 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
 		return;
 
 	if (skb_shinfo(orig_skb)->tx_flags & SKBTX_BPF)
-		skb_tstamp_tx_report_bpf_timestamping(orig_skb, sk, tstype);
+		skb_tstamp_tx_report_bpf_timestamping(orig_skb, hwtstamps,
+						      sk, tstype);
 
 	if (!skb_tstamp_tx_report_so_timestamping(orig_skb, hwtstamps, tstype))
 		return;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index eed91b7296b7..9bd1c7c77b17 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7029,6 +7029,10 @@ enum {
 					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_SW_OPT_CB,	/* Called when skb is about to send
+					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
-- 
2.43.5



* [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (6 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12 23:20   ` Martin KaFai Lau
  2025-02-12  6:18 ` [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback Jason Xing
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

Support hw SCM_TSTAMP_SND case for bpf timestamping.

Add a new sock_ops callback, BPF_SOCK_OPS_TS_HW_OPT_CB. This
callback will occur at the same timestamping point as the user
space's hardware SCM_TSTAMP_SND. The BPF program can use it to
get the same SCM_TSTAMP_SND timestamp without modifying the
user-space application.

To avoid increasing the code complexity, replace SKBTX_HW_TSTAMP
with SKBTX_HW_TSTAMP_NOBPF instead of changing the numerous
driver-side callers that use SKBTX_HW_TSTAMP. The new definition of
SKBTX_HW_TSTAMP is the combination of the socket timestamping and
bpf timestamping flags. After this patch, drivers can work under
bpf timestamping.

Considering that some drivers don't assign the hardware timestamp
to the skb, this patch does the assignment so that the BPF program
can acquire the hwtstamp from the skb directly.
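
As a hedged illustration (assuming the existing skb_hwtstamp field of
struct bpf_sock_ops, which reflects skb_hwtstamps(skb)->hwtstamp, and
the usual BPF includes), the new callback could read the value
assigned by this patch:

  SEC("sockops")
  int ts_hw(struct bpf_sock_ops *skops)
  {
          if (skops->op == BPF_SOCK_OPS_TS_HW_OPT_CB)
                  /* Hardware TX timestamp copied into the skb above. */
                  bpf_printk("hw tstamp %llu ns", skops->skb_hwtstamp);
          return 1;
  }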

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/linux/skbuff.h         | 4 +++-
 include/uapi/linux/bpf.h       | 4 ++++
 net/core/skbuff.c              | 6 +++---
 tools/include/uapi/linux/bpf.h | 4 ++++
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 76582500c5ea..0b4f1889500d 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -470,7 +470,7 @@ struct skb_shared_hwtstamps {
 /* Definitions for tx_flags in struct skb_shared_info */
 enum {
 	/* generate hardware time stamp */
-	SKBTX_HW_TSTAMP = 1 << 0,
+	SKBTX_HW_TSTAMP_NOBPF = 1 << 0,
 
 	/* generate software time stamp when queueing packet to NIC */
 	SKBTX_SW_TSTAMP = 1 << 1,
@@ -494,6 +494,8 @@ enum {
 	SKBTX_BPF = 1 << 7,
 };
 
+#define SKBTX_HW_TSTAMP		(SKBTX_HW_TSTAMP_NOBPF | SKBTX_BPF)
+
 #define SKBTX_ANY_SW_TSTAMP	(SKBTX_SW_TSTAMP    | \
 				 SKBTX_SCHED_TSTAMP | \
 				 SKBTX_BPF)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index b3bd92281084..f70edd067edf 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7043,6 +7043,10 @@ enum {
 					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_HW_OPT_CB,	/* Called in hardware phase when
+					 * SK_BPF_CB_TX_TIMESTAMPING feature
+					 * is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d80d2137692f..4930c43ee77b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5547,7 +5547,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
 	case SCM_TSTAMP_SCHED:
 		return skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP;
 	case SCM_TSTAMP_SND:
-		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP :
+		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP_NOBPF :
 						    SKBTX_SW_TSTAMP);
 	case SCM_TSTAMP_ACK:
 		return TCP_SKB_CB(skb)->txstamp_ack;
@@ -5568,9 +5568,9 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
 		op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
 		break;
 	case SCM_TSTAMP_SND:
+		op = hwts ? BPF_SOCK_OPS_TS_HW_OPT_CB : BPF_SOCK_OPS_TS_SW_OPT_CB;
 		if (hwts)
-			return;
-		op = BPF_SOCK_OPS_TS_SW_OPT_CB;
+			*skb_hwtstamps(skb) = *hwts;
 		break;
 	default:
 		return;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 9bd1c7c77b17..7b9652ce7e3c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7033,6 +7033,10 @@ enum {
 					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_HW_OPT_CB,	/* Called in hardware phase when
+					 * SK_BPF_CB_TX_TIMESTAMPING feature
+					 * is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
-- 
2.43.5



* [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (7 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12 15:26   ` Willem de Bruijn
  2025-02-12  6:18 ` [PATCH bpf-next v10 10/12] bpf: add BPF_SOCK_OPS_TS_SND_CB callback Jason Xing
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

Support the ACK case for bpf timestamping.

Add a new sock_ops callback, BPF_SOCK_OPS_TS_ACK_OPT_CB. This
callback will occur at the same timestamping point as the user
space's SCM_TSTAMP_ACK. The BPF program can use it to get the
same SCM_TSTAMP_ACK timestamp without modifying the user-space
application.

This patch extends txstamp_ack to two bits: 1 stands for the
SO_TIMESTAMPING mode, 2 for the bpf extension.

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/net/tcp.h              | 6 ++++--
 include/uapi/linux/bpf.h       | 5 +++++
 net/core/skbuff.c              | 5 ++++-
 net/dsa/user.c                 | 2 +-
 net/ipv4/tcp.c                 | 2 +-
 net/socket.c                   | 2 +-
 tools/include/uapi/linux/bpf.h | 5 +++++
 7 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4c4dca59352b..2e2fc72e115b 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -958,10 +958,12 @@ struct tcp_skb_cb {
 
 	__u8		sacked;		/* State flags for SACK.	*/
 	__u8		ip_dsfield;	/* IPv4 tos or IPv6 dsfield	*/
-	__u8		txstamp_ack:1,	/* Record TX timestamp for ack? */
+#define TSTAMP_ACK_SK	0x1
+#define TSTAMP_ACK_BPF	0x2
+	__u8		txstamp_ack:2,	/* Record TX timestamp for ack? */
 			eor:1,		/* Is skb MSG_EOR marked? */
 			has_rxtstamp:1,	/* SKB has a RX timestamp	*/
-			unused:5;
+			unused:4;
 	__u32		ack_seq;	/* Sequence number ACK'd	*/
 	union {
 		struct {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index f70edd067edf..9355d617767f 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7047,6 +7047,11 @@ enum {
 					 * SK_BPF_CB_TX_TIMESTAMPING feature
 					 * is on.
 					 */
+	BPF_SOCK_OPS_TS_ACK_OPT_CB,	/* Called when all the skbs in the
+					 * same sendmsg call are acked
+					 * when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4930c43ee77b..9f01dde12e3a 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5550,7 +5550,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
 		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP_NOBPF :
 						    SKBTX_SW_TSTAMP);
 	case SCM_TSTAMP_ACK:
-		return TCP_SKB_CB(skb)->txstamp_ack;
+		return TCP_SKB_CB(skb)->txstamp_ack & TSTAMP_ACK_SK;
 	}
 
 	return false;
@@ -5572,6 +5572,9 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
 		if (hwts)
 			*skb_hwtstamps(skb) = *hwts;
 		break;
+	case SCM_TSTAMP_ACK:
+		op = BPF_SOCK_OPS_TS_ACK_OPT_CB;
+		break;
 	default:
 		return;
 	}
diff --git a/net/dsa/user.c b/net/dsa/user.c
index 291ab1b4acc4..794fe553dd77 100644
--- a/net/dsa/user.c
+++ b/net/dsa/user.c
@@ -897,7 +897,7 @@ static void dsa_skb_tx_timestamp(struct dsa_user_priv *p,
 {
 	struct dsa_switch *ds = p->dp->ds;
 
-	if (!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
+	if (!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP_NOBPF))
 		return;
 
 	if (!ds->ops->port_txtstamp)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..aa080f7ccea4 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -488,7 +488,7 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
 
 		sock_tx_timestamp(sk, sockc, &shinfo->tx_flags);
 		if (tsflags & SOF_TIMESTAMPING_TX_ACK)
-			tcb->txstamp_ack = 1;
+			tcb->txstamp_ack = TSTAMP_ACK_SK;
 		if (tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
 			shinfo->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
 	}
diff --git a/net/socket.c b/net/socket.c
index 262a28b59c7f..517de433d4bb 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -676,7 +676,7 @@ void __sock_tx_timestamp(__u32 tsflags, __u8 *tx_flags)
 	u8 flags = *tx_flags;
 
 	if (tsflags & SOF_TIMESTAMPING_TX_HARDWARE) {
-		flags |= SKBTX_HW_TSTAMP;
+		flags |= SKBTX_HW_TSTAMP_NOBPF;
 
 		/* PTP hardware clocks can provide a free running cycle counter
 		 * as a time base for virtual clocks. Tell driver to use the
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 7b9652ce7e3c..d3e2988b3b4c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7037,6 +7037,11 @@ enum {
 					 * SK_BPF_CB_TX_TIMESTAMPING feature
 					 * is on.
 					 */
+	BPF_SOCK_OPS_TS_ACK_OPT_CB,	/* Called when all the skbs in the
+					 * same sendmsg call are acked
+					 * when SK_BPF_CB_TX_TIMESTAMPING
+					 * feature is on.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
-- 
2.43.5



* [PATCH bpf-next v10 10/12] bpf: add BPF_SOCK_OPS_TS_SND_CB callback
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (8 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping Jason Xing
  2025-02-12  6:18 ` [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature Jason Xing
  11 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

This patch introduces a new callback in tcp_tx_timestamp() to correlate
the tcp_sendmsg timestamp with the timestamps from the other tx
timestamping callbacks (e.g., SND/SW/ACK).

Without this patch, the BPF program wouldn't know which timestamps
belong to which flow because of the lack of socket lock protection.
This new callback is inserted in tcp_tx_timestamp() to address this
issue because tcp_tx_timestamp() still holds the same socket lock as
tcp_sendmsg_locked() and, in the meanwhile, initializes the
timestamping-related fields of the skb, especially the tskey. The
tskey is the bridge for the correlation.

For TCP, the BPF program hooks the beginning of tcp_sendmsg_locked()
and then stores the sendmsg timestamp in bpf_sk_storage, correlating
this timestamp with its tskey, which is later used in the other TX
timestamping callbacks.
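
A hedged sketch of the correlation described above (the map layout and
names are illustrative, not part of this patch; the usual vmlinux.h
and bpf_helpers.h includes are assumed):

  struct snd_stamp {
          __u64 sendmsg_ns;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_SK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, struct snd_stamp);
  } snd_stg SEC(".maps");

  SEC("sockops")
  int ts_correlate(struct bpf_sock_ops *skops)
  {
          struct snd_stamp *stg;
          struct bpf_sock *sk;

          sk = skops->sk;
          if (!sk)
                  return 1;

          switch (skops->op) {
          case BPF_SOCK_OPS_TS_SND_CB:
                  /* Under the socket lock, where the tskey is set up. */
                  stg = bpf_sk_storage_get(&snd_stg, sk, NULL,
                                           BPF_SK_STORAGE_GET_F_CREATE);
                  if (stg)
                          stg->sendmsg_ns = bpf_ktime_get_ns();
                  break;
          case BPF_SOCK_OPS_TS_SW_OPT_CB:
                  /* Driver hand-off: report the sendmsg -> send delta. */
                  stg = bpf_sk_storage_get(&snd_stg, sk, NULL, 0);
                  if (stg)
                          bpf_printk("sendmsg->snd %llu ns",
                                     bpf_ktime_get_ns() - stg->sendmsg_ns);
                  break;
          }
          return 1;
  }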

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 include/uapi/linux/bpf.h       | 5 +++++
 net/ipv4/tcp.c                 | 4 ++++
 tools/include/uapi/linux/bpf.h | 5 +++++
 3 files changed, 14 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 9355d617767f..86fca729fbd8 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -7052,6 +7052,11 @@ enum {
 					 * when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_SND_CB,		/* Called when every sendmsg syscall
+					 * is triggered. It's used to correlate
+					 * sendmsg timestamp with corresponding
+					 * tskey.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index aa080f7ccea4..54424cd20557 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -492,6 +492,10 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
 		if (tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK)
 			shinfo->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
 	}
+
+	if (cgroup_bpf_enabled(CGROUP_SOCK_OPS) &&
+	    SK_BPF_CB_FLAG_TEST(sk, SK_BPF_CB_TX_TIMESTAMPING) && skb)
+		bpf_skops_tx_timestamping(sk, skb, BPF_SOCK_OPS_TS_SND_CB);
 }
 
 static bool tcp_stream_is_readable(struct sock *sk, int target)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d3e2988b3b4c..2739ee0154a0 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -7042,6 +7042,11 @@ enum {
 					 * when SK_BPF_CB_TX_TIMESTAMPING
 					 * feature is on.
 					 */
+	BPF_SOCK_OPS_TS_SND_CB,		/* Called when every sendmsg syscall
+					 * is triggered. It's used to correlate
+					 * sendmsg timestamp with corresponding
+					 * tskey.
+					 */
 };
 
 /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
-- 
2.43.5



* [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (9 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 10/12] bpf: add BPF_SOCK_OPS_TS_SND_CB callback Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-12 23:49   ` Martin KaFai Lau
  2025-02-12  6:18 ` [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature Jason Xing
  11 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

Add the bpf_sock_ops_enable_tx_tstamp kfunc to allow BPF programs to
selectively enable TX timestamping on a skb during tcp_sendmsg().

For example, a BPF program can limit tracking to X packets and then
stop, instead of tracing every sendmsg of the matched flow all along.
It is helpful for users who cannot afford to calculate latencies from
every sendmsg call, probably due to performance or storage space
considerations.
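
A hedged usage sketch (the declarations shown here are assumptions
for illustration; the kernel ctx is obtained with the existing
bpf_cast_to_kern_ctx() kfunc, and the usual BPF includes are assumed):

  extern int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
                                           __u64 flags) __ksym;
  extern void *bpf_cast_to_kern_ctx(void *ctx) __ksym;

  SEC("sockops")
  int sample_tx(struct bpf_sock_ops *skops)
  {
          static __u64 nr_snd;

          if (skops->op != BPF_SOCK_OPS_TS_SND_CB)
                  return 1;

          /* Only timestamp the first 32 sendmsg calls seen by this program. */
          if (__sync_fetch_and_add(&nr_snd, 1) < 32)
                  bpf_sock_ops_enable_tx_tstamp(bpf_cast_to_kern_ctx(skops),
                                                0);
          return 1;
  }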

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 kernel/bpf/btf.c  |  1 +
 net/core/filter.c | 32 +++++++++++++++++++++++++++++++-
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 9433b6467bbe..740210f883dc 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -8522,6 +8522,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
 	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
 	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
 	case BPF_PROG_TYPE_CGROUP_SYSCTL:
+	case BPF_PROG_TYPE_SOCK_OPS:
 		return BTF_KFUNC_HOOK_CGROUP;
 	case BPF_PROG_TYPE_SCHED_ACT:
 		return BTF_KFUNC_HOOK_SCHED_ACT;
diff --git a/net/core/filter.c b/net/core/filter.c
index 7f56d0bbeb00..36793c68b125 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -12102,6 +12102,26 @@ __bpf_kfunc int bpf_sk_assign_tcp_reqsk(struct __sk_buff *s, struct sock *sk,
 #endif
 }
 
+__bpf_kfunc int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
+					      u64 flags)
+{
+	struct sk_buff *skb;
+	struct sock *sk;
+
+	if (skops->op != BPF_SOCK_OPS_TS_SND_CB)
+		return -EOPNOTSUPP;
+
+	skb = skops->skb;
+	sk = skops->sk;
+	skb_shinfo(skb)->tx_flags |= SKBTX_BPF;
+	if (sk_is_tcp(sk)) {
+		TCP_SKB_CB(skb)->txstamp_ack |= TSTAMP_ACK_BPF;
+		skb_shinfo(skb)->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
+	}
+
+	return 0;
+}
+
 __bpf_kfunc_end_defs();
 
 int bpf_dynptr_from_skb_rdonly(struct __sk_buff *skb, u64 flags,
@@ -12135,6 +12155,10 @@ BTF_KFUNCS_START(bpf_kfunc_check_set_tcp_reqsk)
 BTF_ID_FLAGS(func, bpf_sk_assign_tcp_reqsk, KF_TRUSTED_ARGS)
 BTF_KFUNCS_END(bpf_kfunc_check_set_tcp_reqsk)
 
+BTF_KFUNCS_START(bpf_kfunc_check_set_sock_ops)
+BTF_ID_FLAGS(func, bpf_sock_ops_enable_tx_tstamp, KF_TRUSTED_ARGS)
+BTF_KFUNCS_END(bpf_kfunc_check_set_sock_ops)
+
 static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
 	.owner = THIS_MODULE,
 	.set = &bpf_kfunc_check_set_skb,
@@ -12155,6 +12179,11 @@ static const struct btf_kfunc_id_set bpf_kfunc_set_tcp_reqsk = {
 	.set = &bpf_kfunc_check_set_tcp_reqsk,
 };
 
+static const struct btf_kfunc_id_set bpf_kfunc_set_sock_ops = {
+	.owner = THIS_MODULE,
+	.set = &bpf_kfunc_check_set_sock_ops,
+};
+
 static int __init bpf_kfunc_init(void)
 {
 	int ret;
@@ -12173,7 +12202,8 @@ static int __init bpf_kfunc_init(void)
 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
 	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
 					       &bpf_kfunc_set_sock_addr);
-	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
+	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
+	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SOCK_OPS, &bpf_kfunc_set_sock_ops);
 }
 late_initcall(bpf_kfunc_init);
 
-- 
2.43.5



* [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature
  2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
                   ` (10 preceding siblings ...)
  2025-02-12  6:18 ` [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping Jason Xing
@ 2025-02-12  6:18 ` Jason Xing
  2025-02-13  1:08   ` Martin KaFai Lau
  11 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-12  6:18 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, martin.lau, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	ykolal
  Cc: bpf, netdev, Jason Xing

BPF program calculates a couple of latency deltas between each tx
timestamping callbacks. It can be used in the real world to diagnose
the kernel behaviour in the tx path.

Check the safety issues by accessing a few bpf calls in
bpf_test_access_bpf_calls() which are implemented in the patch 3 and 4.

Check if the bpf timestamping can co-exist with socket timestamping.

There remain a few realistic things[1][2] to highlight:
1. in general a packet may pass through multiple qdiscs. For instance
with bonding or tunnel virtual devices in the egress path.
2. packets may be resent, in which case an ACK might precede a repeat
SCHED and SND.
3. erroneous or malicious peers may also just never send an ACK.

[1]: https://lore.kernel.org/all/67a389af981b0_14e0832949d@willemb.c.googlers.com.notmuch/
[2]: https://lore.kernel.org/all/c329a0c1-239b-4ca1-91f2-cb30b8dd2f6a@linux.dev/

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
 .../bpf/prog_tests/net_timestamping.c         | 231 +++++++++++++++++
 .../selftests/bpf/progs/net_timestamping.c    | 244 ++++++++++++++++++
 2 files changed, 475 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/net_timestamping.c
 create mode 100644 tools/testing/selftests/bpf/progs/net_timestamping.c

diff --git a/tools/testing/selftests/bpf/prog_tests/net_timestamping.c b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
new file mode 100644
index 000000000000..dcdc40473a7d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
@@ -0,0 +1,231 @@
+#include <linux/net_tstamp.h>
+#include <sys/time.h>
+#include <linux/errqueue.h>
+#include "test_progs.h"
+#include "network_helpers.h"
+#include "net_timestamping.skel.h"
+
+#define CG_NAME "/net-timestamping-test"
+#define NSEC_PER_SEC    1000000000LL
+
+static const char addr4_str[] = "127.0.0.1";
+static const char addr6_str[] = "::1";
+static struct net_timestamping *skel;
+static int cfg_payload_len = 30;
+static struct timespec usr_ts;
+static u64 delay_tolerance_nsec = 10000000000; /* 10 seconds */
+int SK_TS_SCHED;
+int SK_TS_TXSW;
+int SK_TS_ACK;
+
+static int64_t timespec_to_ns64(struct timespec *ts)
+{
+	return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
+}
+
+static void validate_key(int tskey, int tstype)
+{
+	static int expected_tskey = -1;
+
+	if (tstype == SCM_TSTAMP_SCHED)
+		expected_tskey = cfg_payload_len - 1;
+
+	ASSERT_EQ(expected_tskey, tskey, "tskey mismatch");
+
+	expected_tskey = tskey;
+}
+
+static void validate_timestamp(struct timespec *cur, struct timespec *prev)
+{
+	int64_t cur_ns, prev_ns;
+
+	cur_ns = timespec_to_ns64(cur);
+	prev_ns = timespec_to_ns64(prev);
+
+	ASSERT_TRUE((cur_ns - prev_ns) < delay_tolerance_nsec, "latency");
+}
+
+static void test_socket_timestamp(struct scm_timestamping *tss, int tstype,
+				  int tskey)
+{
+	static struct timespec *prev_ts = &usr_ts;
+
+	validate_key(tskey, tstype);
+
+	switch (tstype) {
+	case SCM_TSTAMP_SCHED:
+		validate_timestamp(&tss->ts[0], prev_ts);
+		SK_TS_SCHED = 1;
+		SK_TS_TXSW = SK_TS_ACK = 0;
+		break;
+	case SCM_TSTAMP_SND:
+		validate_timestamp(&tss->ts[0], prev_ts);
+		SK_TS_TXSW = 1;
+		break;
+	case SCM_TSTAMP_ACK:
+		validate_timestamp(&tss->ts[0], prev_ts);
+		SK_TS_ACK = 1;
+		break;
+	}
+
+	prev_ts = &tss->ts[0];
+}
+
+static void test_recv_errmsg_cmsg(struct msghdr *msg)
+{
+	struct sock_extended_err *serr = NULL;
+	struct scm_timestamping *tss = NULL;
+	struct cmsghdr *cm;
+
+	for (cm = CMSG_FIRSTHDR(msg);
+	     cm && cm->cmsg_len;
+	     cm = CMSG_NXTHDR(msg, cm)) {
+		if (cm->cmsg_level == SOL_SOCKET &&
+		    cm->cmsg_type == SCM_TIMESTAMPING) {
+			tss = (void *) CMSG_DATA(cm);
+		} else if ((cm->cmsg_level == SOL_IP &&
+			    cm->cmsg_type == IP_RECVERR) ||
+			   (cm->cmsg_level == SOL_IPV6 &&
+			    cm->cmsg_type == IPV6_RECVERR) ||
+			   (cm->cmsg_level == SOL_PACKET &&
+			    cm->cmsg_type == PACKET_TX_TIMESTAMP)) {
+			serr = (void *) CMSG_DATA(cm);
+			ASSERT_EQ(serr->ee_origin, SO_EE_ORIGIN_TIMESTAMPING,
+				    "cmsg type");
+		}
+
+		if (serr && tss)
+			test_socket_timestamp(tss, serr->ee_info,
+					      serr->ee_data);
+	}
+}
+
+static bool socket_recv_errmsg(int fd)
+{
+	static char ctrl[1024 /* overprovision*/];
+	char data[cfg_payload_len];
+	static struct msghdr msg;
+	struct iovec entry;
+	int n = 0;
+
+	memset(&msg, 0, sizeof(msg));
+	memset(&entry, 0, sizeof(entry));
+	memset(ctrl, 0, sizeof(ctrl));
+
+	entry.iov_base = data;
+	entry.iov_len = cfg_payload_len;
+	msg.msg_iov = &entry;
+	msg.msg_iovlen = 1;
+	msg.msg_name = NULL;
+	msg.msg_namelen = 0;
+	msg.msg_control = ctrl;
+	msg.msg_controllen = sizeof(ctrl);
+
+	n = recvmsg(fd, &msg, MSG_ERRQUEUE);
+	if (n == -1)
+		ASSERT_EQ(errno, EAGAIN, "recvmsg MSG_ERRQUEUE");
+
+	if (n >= 0)
+		test_recv_errmsg_cmsg(&msg);
+
+	return n == -1;
+
+}
+
+static void test_socket_timestamping(int fd)
+{
+	while (!socket_recv_errmsg(fd));
+
+	ASSERT_EQ(SK_TS_SCHED, 1, "SCM_TSTAMP_SCHED");
+	ASSERT_EQ(SK_TS_TXSW, 1, "SCM_TSTAMP_SND");
+	ASSERT_EQ(SK_TS_ACK, 1, "SCM_TSTAMP_ACK");
+}
+
+static void test_tcp(int family)
+{
+	struct net_timestamping__bss *bss = skel->bss;
+	char buf[cfg_payload_len];
+	int sfd = -1, cfd = -1;
+	unsigned int sock_opt;
+	int ret;
+
+	memset(bss, 0, sizeof(*bss));
+
+	sfd = start_server(family, SOCK_STREAM,
+			   family == AF_INET6 ? addr6_str : addr4_str, 0, 0);
+	if (!ASSERT_OK_FD(sfd, "start_server"))
+		goto out;
+
+	cfd = connect_to_fd(sfd, 0);
+	if (!ASSERT_OK_FD(cfd, "connect_to_fd_server"))
+		goto out;
+
+	sock_opt = SOF_TIMESTAMPING_SOFTWARE |
+		   SOF_TIMESTAMPING_OPT_ID |
+		   SOF_TIMESTAMPING_TX_SCHED |
+		   SOF_TIMESTAMPING_TX_SOFTWARE |
+		   SOF_TIMESTAMPING_TX_ACK;
+	ret = setsockopt(cfd, SOL_SOCKET, SO_TIMESTAMPING,
+			 (char *) &sock_opt, sizeof(sock_opt));
+	if (!ASSERT_OK(ret, "setsockopt SO_TIMESTAMPING"))
+		goto out;
+
+	ret = clock_gettime(CLOCK_REALTIME, &usr_ts);
+	if (!ASSERT_OK(ret, "get user time"))
+		goto out;
+
+	ret = write(cfd, buf, sizeof(buf));
+	if (!ASSERT_EQ(ret, sizeof(buf), "send to server"))
+		goto out;
+
+	/* Test if socket timestamping works correctly even with bpf
+	 * extension enabled.
+	 */
+	test_socket_timestamping(cfd);
+
+	ASSERT_EQ(bss->nr_active, 1, "nr_active");
+	ASSERT_EQ(bss->nr_snd, 2, "nr_snd");
+	ASSERT_EQ(bss->nr_sched, 1, "nr_sched");
+	ASSERT_EQ(bss->nr_txsw, 1, "nr_txsw");
+	ASSERT_EQ(bss->nr_ack, 1, "nr_ack");
+
+out:
+	if (sfd >= 0)
+		close(sfd);
+	if (cfd >= 0)
+		close(cfd);
+}
+
+void test_net_timestamping(void)
+{
+	struct netns_obj *ns;
+	int cg_fd;
+
+	cg_fd = test__join_cgroup(CG_NAME);
+	if (!ASSERT_OK_FD(cg_fd, "join cgroup"))
+		return;
+
+	ns = netns_new("net_timestamping_ns", true);
+	if (!ASSERT_OK_PTR(ns, "create ns"))
+		goto done;
+
+	skel = net_timestamping__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "open and load skel"))
+		goto done;
+
+	if (!ASSERT_OK(net_timestamping__attach(skel), "attach skel"))
+		goto done;
+
+	skel->links.skops_sockopt =
+		bpf_program__attach_cgroup(skel->progs.skops_sockopt, cg_fd);
+	if (!ASSERT_OK_PTR(skel->links.skops_sockopt, "attach cgroup"))
+		goto done;
+
+	test_tcp(AF_INET6);
+	test_tcp(AF_INET);
+
+done:
+	net_timestamping__destroy(skel);
+	netns_free(ns);
+	close(cg_fd);
+}
diff --git a/tools/testing/selftests/bpf/progs/net_timestamping.c b/tools/testing/selftests/bpf/progs/net_timestamping.c
new file mode 100644
index 000000000000..d3e1da599626
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/net_timestamping.c
@@ -0,0 +1,244 @@
+#include "vmlinux.h"
+#include "bpf_tracing_net.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
+#include <errno.h>
+
+#define SK_BPF_CB_FLAGS 1009
+#define SK_BPF_CB_TX_TIMESTAMPING 1
+
+int nr_active;
+int nr_snd;
+int nr_passive;
+int nr_sched;
+int nr_txsw;
+int nr_ack;
+
+struct sk_stg {
+	__u64 sendmsg_ns;	/* record ts when sendmsg is called */
+};
+
+struct sk_tskey {
+	u64 cookie;
+	u32 tskey;
+};
+
+struct delay_info {
+	u64 sendmsg_ns;		/* record ts when sendmsg is called */
+	u32 sched_delay;	/* SCHED_OPT_CB - sendmsg_ns */
+	u32 sw_snd_delay;	/* SW_OPT_CB - SCHED_OPT_CB */
+	u32 ack_delay;		/* ACK_OPT_CB - SW_OPT_CB */
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct sk_stg);
+} sk_stg_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__type(key, struct sk_tskey);
+	__type(value, struct delay_info);
+	__uint(max_entries, 1024);
+} time_map SEC(".maps");
+
+static u64 delay_tolerance_nsec = 10000000000; /* 10 second as an example */
+
+extern int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops, u64 flags) __ksym;
+
+static int bpf_test_sockopt(void *ctx, const struct sock *sk, int expected)
+{
+	int tmp, new = SK_BPF_CB_TX_TIMESTAMPING;
+	int opt = SK_BPF_CB_FLAGS;
+	int level = SOL_SOCKET;
+
+	if (bpf_setsockopt(ctx, level, opt, &new, sizeof(new)) != expected)
+		return 1;
+
+	if (bpf_getsockopt(ctx, level, opt, &tmp, sizeof(tmp)) != expected ||
+	    (!expected && tmp != new))
+		return 1;
+
+	return 0;
+}
+
+static bool bpf_test_access_sockopt(void *ctx, const struct sock *sk)
+{
+	if (bpf_test_sockopt(ctx, sk, -EOPNOTSUPP))
+		return true;
+	return false;
+}
+
+static bool bpf_test_access_load_hdr_opt(struct bpf_sock_ops *skops)
+{
+	u8 opt[3] = {0};
+	int load_flags = 0;
+	int ret;
+
+	ret = bpf_load_hdr_opt(skops, opt, sizeof(opt), load_flags);
+	if (ret != -EOPNOTSUPP)
+		return true;
+
+	return false;
+}
+
+static bool bpf_test_access_cb_flags_set(struct bpf_sock_ops *skops)
+{
+	int ret;
+
+	ret = bpf_sock_ops_cb_flags_set(skops, 0);
+	if (ret != -EOPNOTSUPP)
+		return true;
+
+	return false;
+}
+
+/* In the timestamping callbacks, we're not allowed to call the following
+ * BPF CALLs for the safety concern. Return false if expected.
+ */
+static bool bpf_test_access_bpf_calls(struct bpf_sock_ops *skops,
+				     const struct sock *sk)
+{
+	if (bpf_test_access_sockopt(skops, sk))
+		return true;
+
+	if (bpf_test_access_load_hdr_opt(skops))
+		return true;
+
+	if (bpf_test_access_cb_flags_set(skops))
+		return true;
+
+	return false;
+}
+
+static bool bpf_test_delay(struct bpf_sock_ops *skops, const struct sock *sk)
+{
+	struct bpf_sock_ops_kern *skops_kern;
+	u64 timestamp = bpf_ktime_get_ns();
+	struct skb_shared_info *shinfo;
+	struct delay_info dinfo = {0};
+	struct sk_tskey key = {0};
+	struct delay_info *val;
+	struct sk_buff *skb;
+	struct sk_stg *stg;
+	u64 prior_ts, delay;
+
+	if (bpf_test_access_bpf_calls(skops, sk))
+		return false;
+
+	skops_kern = bpf_cast_to_kern_ctx(skops);
+	skb = skops_kern->skb;
+	shinfo = bpf_core_cast(skb->head + skb->end, struct skb_shared_info);
+
+	key.cookie = bpf_get_socket_cookie(skops);
+	if (!key.cookie)
+		return false;
+
+	if (skops->op == BPF_SOCK_OPS_TS_SND_CB) {
+		stg = bpf_sk_storage_get(&sk_stg_map, (void *)sk, 0, 0);
+		if (!stg)
+			return false;
+		dinfo.sendmsg_ns = stg->sendmsg_ns;
+		bpf_sock_ops_enable_tx_tstamp(skops_kern, 0);
+		key.tskey = shinfo->tskey;
+		if (!key.tskey)
+			return false;
+		bpf_map_update_elem(&time_map, &key, &dinfo, BPF_ANY);
+		return true;
+	}
+
+	key.tskey = shinfo->tskey;
+	if (!key.tskey)
+		return false;
+
+	val = bpf_map_lookup_elem(&time_map, &key);
+	if (!val)
+		return false;
+
+	switch (skops->op) {
+	case BPF_SOCK_OPS_TS_SCHED_OPT_CB:
+		delay = val->sched_delay = timestamp - val->sendmsg_ns;
+		break;
+	case BPF_SOCK_OPS_TS_SW_OPT_CB:
+		prior_ts = val->sched_delay + val->sendmsg_ns;
+		delay = val->sw_snd_delay = timestamp - prior_ts;
+		break;
+	case BPF_SOCK_OPS_TS_ACK_OPT_CB:
+		prior_ts = val->sw_snd_delay + val->sched_delay + val->sendmsg_ns;
+		delay = val->ack_delay = timestamp - prior_ts;
+		break;
+	}
+
+	if (delay >= delay_tolerance_nsec)
+		return false;
+
+	/* Since it's the last one, remove from the map after latency check */
+	if (skops->op == BPF_SOCK_OPS_TS_ACK_OPT_CB)
+		bpf_map_delete_elem(&time_map, &key);
+
+	return true;
+}
+
+SEC("fentry/tcp_sendmsg_locked")
+int BPF_PROG(trace_tcp_sendmsg_locked, struct sock *sk, struct msghdr *msg, size_t size)
+{
+	u64 timestamp = bpf_ktime_get_ns();
+	u32 flag = sk->sk_bpf_cb_flags;
+	struct sk_stg *stg;
+
+	if (!flag)
+		return 0;
+
+	stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
+				 BPF_SK_STORAGE_GET_F_CREATE);
+	if (!stg)
+		return 0;
+
+	stg->sendmsg_ns = timestamp;
+	nr_snd += 1;
+	return 0;
+}
+
+SEC("sockops")
+int skops_sockopt(struct bpf_sock_ops *skops)
+{
+	struct bpf_sock *bpf_sk = skops->sk;
+	const struct sock *sk;
+
+	if (!bpf_sk)
+		return 1;
+
+	sk = (struct sock *)bpf_skc_to_tcp_sock(bpf_sk);
+	if (!sk)
+		return 1;
+
+	switch (skops->op) {
+	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
+		nr_active += !bpf_test_sockopt(skops, sk, 0);
+		break;
+	case BPF_SOCK_OPS_TS_SND_CB:
+		if (bpf_test_delay(skops, sk))
+			nr_snd += 1;
+		break;
+	case BPF_SOCK_OPS_TS_SCHED_OPT_CB:
+		if (bpf_test_delay(skops, sk))
+			nr_sched += 1;
+		break;
+	case BPF_SOCK_OPS_TS_SW_OPT_CB:
+		if (bpf_test_delay(skops, sk))
+			nr_txsw += 1;
+		break;
+	case BPF_SOCK_OPS_TS_ACK_OPT_CB:
+		if (bpf_test_delay(skops, sk))
+			nr_ack += 1;
+		break;
+	}
+
+	return 1;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  2025-02-12  6:18 ` [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback Jason Xing
@ 2025-02-12 15:26   ` Willem de Bruijn
  2025-02-13  0:07     ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Willem de Bruijn @ 2025-02-12 15:26 UTC (permalink / raw)
  To: Jason Xing, davem, edumazet, kuba, pabeni, dsahern,
	willemdebruijn.kernel, willemb, ast, daniel, andrii, martin.lau,
	eddyz87, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, shuah, ykolal
  Cc: bpf, netdev, Jason Xing

Jason Xing wrote:
> Support the ACK case for bpf timestamping.
> 
> Add a new sock_ops callback, BPF_SOCK_OPS_TS_ACK_OPT_CB. This
> callback will occur at the same timestamping point as the user
> space's SCM_TSTAMP_ACK. The BPF program can use it to get the
> same SCM_TSTAMP_ACK timestamp without modifying the user-space
> application.
> 
> This patch extends txstamp_ack to two bits: 1 stands for
> SO_TIMESTAMPING mode, 2 bpf extension.
> 
> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
>  include/net/tcp.h              | 6 ++++--
>  include/uapi/linux/bpf.h       | 5 +++++
>  net/core/skbuff.c              | 5 ++++-
>  net/dsa/user.c                 | 2 +-
>  net/ipv4/tcp.c                 | 2 +-
>  net/socket.c                   | 2 +-
>  tools/include/uapi/linux/bpf.h | 5 +++++
>  7 files changed, 21 insertions(+), 6 deletions(-)

> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index 0d704bda6c41..aa080f7ccea4 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -488,7 +488,7 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
>  
>  		sock_tx_timestamp(sk, sockc, &shinfo->tx_flags);
>  		if (tsflags & SOF_TIMESTAMPING_TX_ACK)
> -			tcb->txstamp_ack = 1;
> +			tcb->txstamp_ack = TSTAMP_ACK_SK;

Similar to the BPF code, should this be |= TSTAMP_ACK_SK?

Does not matter in practice if the BPF setter can never precede this.
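
In other words, an untested sketch of the |= variant being asked about:

		if (tsflags & SOF_TIMESTAMPING_TX_ACK)
			tcb->txstamp_ack |= TSTAMP_ACK_SK;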

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback
  2025-02-12  6:18 ` [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback Jason Xing
@ 2025-02-12 23:18   ` Martin KaFai Lau
  2025-02-13  7:24     ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Martin KaFai Lau @ 2025-02-12 23:18 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On 2/11/25 10:18 PM, Jason Xing wrote:
> Support sw SCM_TSTAMP_SND case for bpf timestamping.
> 
> Add a new sock_ops callback, BPF_SOCK_OPS_TS_SW_OPT_CB. This
> callback will occur at the same timestamping point as the user
> space's software SCM_TSTAMP_SND. The BPF program can use it to
> get the same SCM_TSTAMP_SND timestamp without modifying the
> user-space application.
> 
> Based on this patch, BPF program will get the software
> timestamp when the driver is ready to send the skb. In the
> subsequent patch, the hardware timestamp will be supported.
> 
> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
>   include/linux/skbuff.h         | 2 +-
>   include/uapi/linux/bpf.h       | 4 ++++
>   net/core/skbuff.c              | 9 ++++++++-
>   tools/include/uapi/linux/bpf.h | 4 ++++
>   4 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 52f6e033e704..76582500c5ea 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -4568,7 +4568,7 @@ void skb_tstamp_tx(struct sk_buff *orig_skb,
>   static inline void skb_tx_timestamp(struct sk_buff *skb)
>   {
>   	skb_clone_tx_timestamp(skb);
> -	if (skb_shinfo(skb)->tx_flags & SKBTX_SW_TSTAMP)
> +	if (skb_shinfo(skb)->tx_flags & (SKBTX_SW_TSTAMP | SKBTX_BPF))
>   		skb_tstamp_tx(skb, NULL);
>   }
>   
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 68664ececdc0..b3bd92281084 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -7039,6 +7039,10 @@ enum {
>   					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
>   					 * feature is on.
>   					 */
> +	BPF_SOCK_OPS_TS_SW_OPT_CB,	/* Called when skb is about to send
> +					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> +					 * feature is on.
> +					 */
>   };
>   
>   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 7bac5e950e3d..d80d2137692f 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -5557,6 +5557,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
>   }
>   
>   static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
> +						  struct skb_shared_hwtstamps *hwts,

s/hwts/hwtstamps/
Use the same argument name as all other functions in this file. Its caller is 
using hwtstamps as the argument name also. Easier to follow.

Probably the same for the skb_tstamp_tx_report_so_timestamping().

>   						  struct sock *sk,
>   						  int tstype)
>   {
> @@ -5566,6 +5567,11 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
>   	case SCM_TSTAMP_SCHED:
>   		op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
>   		break;
> +	case SCM_TSTAMP_SND:
> +		if (hwts)
> +			return;
> +		op = BPF_SOCK_OPS_TS_SW_OPT_CB;
> +		break;
>   	default:
>   		return;
>   	}
> @@ -5586,7 +5592,8 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
>   		return;
>   
>   	if (skb_shinfo(orig_skb)->tx_flags & SKBTX_BPF)
> -		skb_tstamp_tx_report_bpf_timestamping(orig_skb, sk, tstype);
> +		skb_tstamp_tx_report_bpf_timestamping(orig_skb, hwtstamps,
> +						      sk, tstype);
>   
>   	if (!skb_tstamp_tx_report_so_timestamping(orig_skb, hwtstamps, tstype))
>   		return;
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index eed91b7296b7..9bd1c7c77b17 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -7029,6 +7029,10 @@ enum {
>   					 * dev layer when SK_BPF_CB_TX_TIMESTAMPING
>   					 * feature is on.
>   					 */
> +	BPF_SOCK_OPS_TS_SW_OPT_CB,	/* Called when skb is about to send
> +					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> +					 * feature is on.
> +					 */
>   };
>   
>   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback
  2025-02-12  6:18 ` [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback Jason Xing
@ 2025-02-12 23:20   ` Martin KaFai Lau
  2025-02-13  7:24     ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Martin KaFai Lau @ 2025-02-12 23:20 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On 2/11/25 10:18 PM, Jason Xing wrote:
> Support hw SCM_TSTAMP_SND case for bpf timestamping.
> 
> Add a new sock_ops callback, BPF_SOCK_OPS_TS_HW_OPT_CB. This
> callback will occur at the same timestamping point as the user
> space's hardware SCM_TSTAMP_SND. The BPF program can use it to
> get the same SCM_TSTAMP_SND timestamp without modifying the
> user-space application.
> 
> To avoid increasing the code complexity, replace SKBTX_HW_TSTAMP
> with SKBTX_HW_TSTAMP_NOBPF instead of changing numerous callers
> from driver side using SKBTX_HW_TSTAMP. The new definition of
> SKBTX_HW_TSTAMP means the combination tests of socket timestamping
> and bpf timestamping. After this patch, drivers can work under the
> bpf timestamping.
> 
> Considering some drivers don't assign the hardware timestamp to the
> skb, this patch does the assignment so that the BPF program can
> acquire the hwstamp from the skb directly.
> 
> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
>   include/linux/skbuff.h         | 4 +++-
>   include/uapi/linux/bpf.h       | 4 ++++
>   net/core/skbuff.c              | 6 +++---
>   tools/include/uapi/linux/bpf.h | 4 ++++
>   4 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 76582500c5ea..0b4f1889500d 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -470,7 +470,7 @@ struct skb_shared_hwtstamps {
>   /* Definitions for tx_flags in struct skb_shared_info */
>   enum {
>   	/* generate hardware time stamp */
> -	SKBTX_HW_TSTAMP = 1 << 0,
> +	SKBTX_HW_TSTAMP_NOBPF = 1 << 0,
>   
>   	/* generate software time stamp when queueing packet to NIC */
>   	SKBTX_SW_TSTAMP = 1 << 1,
> @@ -494,6 +494,8 @@ enum {
>   	SKBTX_BPF = 1 << 7,
>   };
>   
> +#define SKBTX_HW_TSTAMP		(SKBTX_HW_TSTAMP_NOBPF | SKBTX_BPF)
> +
>   #define SKBTX_ANY_SW_TSTAMP	(SKBTX_SW_TSTAMP    | \
>   				 SKBTX_SCHED_TSTAMP | \
>   				 SKBTX_BPF)
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index b3bd92281084..f70edd067edf 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -7043,6 +7043,10 @@ enum {
>   					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
>   					 * feature is on.
>   					 */
> +	BPF_SOCK_OPS_TS_HW_OPT_CB,	/* Called in hardware phase when
> +					 * SK_BPF_CB_TX_TIMESTAMPING feature
> +					 * is on.
> +					 */
>   };
>   
>   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index d80d2137692f..4930c43ee77b 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -5547,7 +5547,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
>   	case SCM_TSTAMP_SCHED:
>   		return skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP;
>   	case SCM_TSTAMP_SND:
> -		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP :
> +		return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP_NOBPF :
>   						    SKBTX_SW_TSTAMP);
>   	case SCM_TSTAMP_ACK:
>   		return TCP_SKB_CB(skb)->txstamp_ack;
> @@ -5568,9 +5568,9 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
>   		op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
>   		break;
>   	case SCM_TSTAMP_SND:
> +		op = hwts ? BPF_SOCK_OPS_TS_HW_OPT_CB : BPF_SOCK_OPS_TS_SW_OPT_CB;

Remove this "hwts" test.

>   		if (hwts)

Reuse this and do everything in this "if else" statement.
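
Roughly something like this (untested sketch of the suggestion):

	case SCM_TSTAMP_SND:
		if (hwts) {
			op = BPF_SOCK_OPS_TS_HW_OPT_CB;
			*skb_hwtstamps(skb) = *hwts;
		} else {
			op = BPF_SOCK_OPS_TS_SW_OPT_CB;
		}
		break;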

> -			return;
> -		op = BPF_SOCK_OPS_TS_SW_OPT_CB;
> +			*skb_hwtstamps(skb) = *hwts;
>   		break;
>   	default:
>   		return;
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 9bd1c7c77b17..7b9652ce7e3c 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -7033,6 +7033,10 @@ enum {
>   					 * to the nic when SK_BPF_CB_TX_TIMESTAMPING
>   					 * feature is on.
>   					 */
> +	BPF_SOCK_OPS_TS_HW_OPT_CB,	/* Called in hardware phase when
> +					 * SK_BPF_CB_TX_TIMESTAMPING feature
> +					 * is on.
> +					 */
>   };
>   
>   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping
  2025-02-12  6:18 ` [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping Jason Xing
@ 2025-02-12 23:49   ` Martin KaFai Lau
  2025-02-13  7:26     ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Martin KaFai Lau @ 2025-02-12 23:49 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On 2/11/25 10:18 PM, Jason Xing wrote:
> Add the bpf_sock_ops_enable_tx_tstamp kfunc to allow BPF programs to
> selectively enable TX timestamping on a skb during tcp_sendmsg().
> 
> For example, BPF program will limit tracking X numbers of packets
> and then will stop there instead of tracing all the sendmsgs of
> matched flow all along. It would be helpful for users who cannot
> afford to calculate latencies from every sendmsg call probably
> due to the performance or storage space consideration.
> 
> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
>   kernel/bpf/btf.c  |  1 +
>   net/core/filter.c | 32 +++++++++++++++++++++++++++++++-
>   2 files changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 9433b6467bbe..740210f883dc 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -8522,6 +8522,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
>   	case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
>   	case BPF_PROG_TYPE_CGROUP_SOCKOPT:
>   	case BPF_PROG_TYPE_CGROUP_SYSCTL:
> +	case BPF_PROG_TYPE_SOCK_OPS:
>   		return BTF_KFUNC_HOOK_CGROUP;
>   	case BPF_PROG_TYPE_SCHED_ACT:
>   		return BTF_KFUNC_HOOK_SCHED_ACT;
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 7f56d0bbeb00..36793c68b125 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -12102,6 +12102,26 @@ __bpf_kfunc int bpf_sk_assign_tcp_reqsk(struct __sk_buff *s, struct sock *sk,
>   #endif
>   }
>   
> +__bpf_kfunc int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
> +					      u64 flags)
> +{
> +	struct sk_buff *skb;
> +	struct sock *sk;
> +
> +	if (skops->op != BPF_SOCK_OPS_TS_SND_CB)
> +		return -EOPNOTSUPP;

It still needs to test the "flags" such that it can be used in the future....

	if (flags)
		return -EINVAL;

> +
> +	skb = skops->skb;
> +	sk = skops->sk;
> +	skb_shinfo(skb)->tx_flags |= SKBTX_BPF;
> +	if (sk_is_tcp(sk)) {

An unnecessary check like this will only confuse the reader. Remove it and
revisit when UDP is supported.

> +		TCP_SKB_CB(skb)->txstamp_ack |= TSTAMP_ACK_BPF;
> +		skb_shinfo(skb)->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
> +	}
> +
> +	return 0;
> +}
> +
>   __bpf_kfunc_end_defs();
>   
>   int bpf_dynptr_from_skb_rdonly(struct __sk_buff *skb, u64 flags,
> @@ -12135,6 +12155,10 @@ BTF_KFUNCS_START(bpf_kfunc_check_set_tcp_reqsk)
>   BTF_ID_FLAGS(func, bpf_sk_assign_tcp_reqsk, KF_TRUSTED_ARGS)
>   BTF_KFUNCS_END(bpf_kfunc_check_set_tcp_reqsk)
>   
> +BTF_KFUNCS_START(bpf_kfunc_check_set_sock_ops)
> +BTF_ID_FLAGS(func, bpf_sock_ops_enable_tx_tstamp, KF_TRUSTED_ARGS)
> +BTF_KFUNCS_END(bpf_kfunc_check_set_sock_ops)
> +
>   static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
>   	.owner = THIS_MODULE,
>   	.set = &bpf_kfunc_check_set_skb,
> @@ -12155,6 +12179,11 @@ static const struct btf_kfunc_id_set bpf_kfunc_set_tcp_reqsk = {
>   	.set = &bpf_kfunc_check_set_tcp_reqsk,
>   };
>   
> +static const struct btf_kfunc_id_set bpf_kfunc_set_sock_ops = {
> +	.owner = THIS_MODULE,
> +	.set = &bpf_kfunc_check_set_sock_ops,
> +};
> +
>   static int __init bpf_kfunc_init(void)
>   {
>   	int ret;
> @@ -12173,7 +12202,8 @@ static int __init bpf_kfunc_init(void)
>   	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
>   	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
>   					       &bpf_kfunc_set_sock_addr);
> -	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
> +	ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
> +	return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SOCK_OPS, &bpf_kfunc_set_sock_ops);
>   }
>   late_initcall(bpf_kfunc_init);
>   


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  2025-02-12 15:26   ` Willem de Bruijn
@ 2025-02-13  0:07     ` Jason Xing
  2025-02-13  7:23       ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-13  0:07 UTC (permalink / raw)
  To: Willem de Bruijn
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemb, ast, daniel,
	andrii, martin.lau, eddyz87, song, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf, netdev

On Wed, Feb 12, 2025 at 11:26 PM Willem de Bruijn
<willemdebruijn.kernel@gmail.com> wrote:
>
> Jason Xing wrote:
> > Support the ACK case for bpf timestamping.
> >
> > Add a new sock_ops callback, BPF_SOCK_OPS_TS_ACK_OPT_CB. This
> > callback will occur at the same timestamping point as the user
> > space's SCM_TSTAMP_ACK. The BPF program can use it to get the
> > same SCM_TSTAMP_ACK timestamp without modifying the user-space
> > application.
> >
> > This patch extends txstamp_ack to two bits: 1 stands for
> > SO_TIMESTAMPING mode, 2 bpf extension.
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> >  include/net/tcp.h              | 6 ++++--
> >  include/uapi/linux/bpf.h       | 5 +++++
> >  net/core/skbuff.c              | 5 ++++-
> >  net/dsa/user.c                 | 2 +-
> >  net/ipv4/tcp.c                 | 2 +-
> >  net/socket.c                   | 2 +-
> >  tools/include/uapi/linux/bpf.h | 5 +++++
> >  7 files changed, 21 insertions(+), 6 deletions(-)
>
> > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > index 0d704bda6c41..aa080f7ccea4 100644
> > --- a/net/ipv4/tcp.c
> > +++ b/net/ipv4/tcp.c
> > @@ -488,7 +488,7 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
> >
> >               sock_tx_timestamp(sk, sockc, &shinfo->tx_flags);
> >               if (tsflags & SOF_TIMESTAMPING_TX_ACK)
> > -                     tcb->txstamp_ack = 1;
> > +                     tcb->txstamp_ack = TSTAMP_ACK_SK;
>
> Similar to the BPF code, should this be |= TSTAMP_ACK_SK?
>
> Does not matter in practice if the BPF setter can never precede this.

I had the same thought on this too. We've already fixed the position
and order (of using socket timestamping and bpf timestamping).

I have no strong preference. If you insist, I can surely adjust it.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature
  2025-02-12  6:18 ` [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature Jason Xing
@ 2025-02-13  1:08   ` Martin KaFai Lau
  2025-02-13 11:31     ` Jason Xing
  0 siblings, 1 reply; 25+ messages in thread
From: Martin KaFai Lau @ 2025-02-13  1:08 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On 2/11/25 10:18 PM, Jason Xing wrote:
> The BPF program calculates a couple of latency deltas between
> consecutive tx timestamping callbacks. It can be used in the real world
> to diagnose the kernel behaviour in the tx path.
> 
> Check the safety restrictions by invoking a few bpf calls in
> bpf_test_access_bpf_calls(), which are implemented in patches 3 and 4.
> 
> Check if the bpf timestamping can co-exist with socket timestamping.
> 
> There remain a few realistic things[1][2] to highlight:
> 1. in general a packet may pass through multiple qdiscs. For instance
> with bonding or tunnel virtual devices in the egress path.
> 2. packets may be resent, in which case an ACK might precede a repeat
> SCHED and SND.
> 3. erroneous or malicious peers may also just never send an ACK.
> 
> [1]: https://lore.kernel.org/all/67a389af981b0_14e0832949d@willemb.c.googlers.com.notmuch/
> [2]: https://lore.kernel.org/all/c329a0c1-239b-4ca1-91f2-cb30b8dd2f6a@linux.dev/
> 
> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
>   .../bpf/prog_tests/net_timestamping.c         | 231 +++++++++++++++++
>   .../selftests/bpf/progs/net_timestamping.c    | 244 ++++++++++++++++++
>   2 files changed, 475 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/prog_tests/net_timestamping.c
>   create mode 100644 tools/testing/selftests/bpf/progs/net_timestamping.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/net_timestamping.c b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
> new file mode 100644
> index 000000000000..dcdc40473a7d
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
> @@ -0,0 +1,231 @@
> +#include <linux/net_tstamp.h>
> +#include <sys/time.h>
> +#include <linux/errqueue.h>
> +#include "test_progs.h"
> +#include "network_helpers.h"
> +#include "net_timestamping.skel.h"
> +
> +#define CG_NAME "/net-timestamping-test"
> +#define NSEC_PER_SEC    1000000000LL
> +
> +static const char addr4_str[] = "127.0.0.1";
> +static const char addr6_str[] = "::1";
> +static struct net_timestamping *skel;
> +static int cfg_payload_len = 30;

const ?

> +static struct timespec usr_ts;
> +static u64 delay_tolerance_nsec = 10000000000; /* 10 seconds */
> +int SK_TS_SCHED;
> +int SK_TS_TXSW;
> +int SK_TS_ACK;
> +
> +static int64_t timespec_to_ns64(struct timespec *ts)
> +{
> +	return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
> +}
> +
> +static void validate_key(int tskey, int tstype)
> +{
> +	static int expected_tskey = -1;
> +
> +	if (tstype == SCM_TSTAMP_SCHED)
> +		expected_tskey = cfg_payload_len - 1;
> +
> +	ASSERT_EQ(expected_tskey, tskey, "tskey mismatch");
> +
> +	expected_tskey = tskey;
> +}
> +
> +static void validate_timestamp(struct timespec *cur, struct timespec *prev)
> +{
> +	int64_t cur_ns, prev_ns;
> +
> +	cur_ns = timespec_to_ns64(cur);
> +	prev_ns = timespec_to_ns64(prev);
> +
> +	ASSERT_TRUE((cur_ns - prev_ns) < delay_tolerance_nsec, "latency");

ASSERT_LT()
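
i.e. assuming the usual ASSERT_LT(actual, expected, name) form, something like:

	ASSERT_LT(cur_ns - prev_ns, delay_tolerance_nsec, "latency");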

> +}
> +
> +static void test_socket_timestamp(struct scm_timestamping *tss, int tstype,
> +				  int tskey)
> +{
> +	static struct timespec *prev_ts = &usr_ts;
> +
> +	validate_key(tskey, tstype);
> +
> +	switch (tstype) {
> +	case SCM_TSTAMP_SCHED:
> +		validate_timestamp(&tss->ts[0], prev_ts);
> +		SK_TS_SCHED = 1;
> +		SK_TS_TXSW = SK_TS_ACK = 0;
> +		break;
> +	case SCM_TSTAMP_SND:
> +		validate_timestamp(&tss->ts[0], prev_ts);
> +		SK_TS_TXSW = 1;
> +		break;
> +	case SCM_TSTAMP_ACK:
> +		validate_timestamp(&tss->ts[0], prev_ts);
> +		SK_TS_ACK = 1;
> +		break;
> +	}
> +
> +	prev_ts = &tss->ts[0];
> +}
> +
> +static void test_recv_errmsg_cmsg(struct msghdr *msg)
> +{
> +	struct sock_extended_err *serr = NULL;
> +	struct scm_timestamping *tss = NULL;
> +	struct cmsghdr *cm;
> +
> +	for (cm = CMSG_FIRSTHDR(msg);
> +	     cm && cm->cmsg_len;
> +	     cm = CMSG_NXTHDR(msg, cm)) {
> +		if (cm->cmsg_level == SOL_SOCKET &&
> +		    cm->cmsg_type == SCM_TIMESTAMPING) {
> +			tss = (void *) CMSG_DATA(cm);
> +		} else if ((cm->cmsg_level == SOL_IP &&
> +			    cm->cmsg_type == IP_RECVERR) ||
> +			   (cm->cmsg_level == SOL_IPV6 &&
> +			    cm->cmsg_type == IPV6_RECVERR) ||
> +			   (cm->cmsg_level == SOL_PACKET &&
> +			    cm->cmsg_type == PACKET_TX_TIMESTAMP)) {
> +			serr = (void *) CMSG_DATA(cm);
> +			ASSERT_EQ(serr->ee_origin, SO_EE_ORIGIN_TIMESTAMPING,
> +				    "cmsg type");
> +		}
> +
> +		if (serr && tss)
> +			test_socket_timestamp(tss, serr->ee_info,
> +					      serr->ee_data);
> +	}
> +}
> +
> +static bool socket_recv_errmsg(int fd)
> +{
> +	static char ctrl[1024 /* overprovision*/];
> +	char data[cfg_payload_len];
> +	static struct msghdr msg;
> +	struct iovec entry;
> +	int n = 0;
> +
> +	memset(&msg, 0, sizeof(msg));
> +	memset(&entry, 0, sizeof(entry));
> +	memset(ctrl, 0, sizeof(ctrl));
> +
> +	entry.iov_base = data;
> +	entry.iov_len = cfg_payload_len;
> +	msg.msg_iov = &entry;
> +	msg.msg_iovlen = 1;
> +	msg.msg_name = NULL;
> +	msg.msg_namelen = 0;
> +	msg.msg_control = ctrl;
> +	msg.msg_controllen = sizeof(ctrl);
> +
> +	n = recvmsg(fd, &msg, MSG_ERRQUEUE);
> +	if (n == -1)
> +		ASSERT_EQ(errno, EAGAIN, "recvmsg MSG_ERRQUEUE");
> +
> +	if (n >= 0)
> +		test_recv_errmsg_cmsg(&msg);
> +
> +	return n == -1;
> +
> +}
> +
> +static void test_socket_timestamping(int fd)
> +{
> +	while (!socket_recv_errmsg(fd));
> +
> +	ASSERT_EQ(SK_TS_SCHED, 1, "SCM_TSTAMP_SCHED");
> +	ASSERT_EQ(SK_TS_TXSW, 1, "SCM_TSTAMP_SND");
> +	ASSERT_EQ(SK_TS_ACK, 1, "SCM_TSTAMP_ACK");
> +}
> +
> +static void test_tcp(int family)
> +{
> +	struct net_timestamping__bss *bss = skel->bss;
> +	char buf[cfg_payload_len];
> +	int sfd = -1, cfd = -1;
> +	unsigned int sock_opt;
> +	int ret;
> +
> +	memset(bss, 0, sizeof(*bss));

Don't the new global variables, e.g. SK_TS_SCHED, also need to be reset?
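If so, a one-liner sketch before each run might be enough:

	SK_TS_SCHED = SK_TS_TXSW = SK_TS_ACK = 0;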

> +
> +	sfd = start_server(family, SOCK_STREAM,
> +			   family == AF_INET6 ? addr6_str : addr4_str, 0, 0);
> +	if (!ASSERT_OK_FD(sfd, "start_server"))
> +		goto out;
> +
> +	cfd = connect_to_fd(sfd, 0);
> +	if (!ASSERT_OK_FD(cfd, "connect_to_fd_server"))
> +		goto out;
> +
> +	sock_opt = SOF_TIMESTAMPING_SOFTWARE |
> +		   SOF_TIMESTAMPING_OPT_ID |
> +		   SOF_TIMESTAMPING_TX_SCHED |
> +		   SOF_TIMESTAMPING_TX_SOFTWARE |
> +		   SOF_TIMESTAMPING_TX_ACK;
> +	ret = setsockopt(cfd, SOL_SOCKET, SO_TIMESTAMPING,
> +			 (char *) &sock_opt, sizeof(sock_opt));

It also needs the original test in v9 to check that bpf timestamping works
without the user space's SO_TIMESTAMPING, which is the major use case of this
series.

It should be easy to do by conditionally enabling the SO_TIMESTAMPING here.
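
For example, a rough sketch (the "enable_socket_timestamping" parameter name
below is made up):

	static void test_tcp(int family, bool enable_socket_timestamping)
	{
		...

		if (enable_socket_timestamping) {
			sock_opt = SOF_TIMESTAMPING_SOFTWARE |
				   SOF_TIMESTAMPING_OPT_ID |
				   SOF_TIMESTAMPING_TX_SCHED |
				   SOF_TIMESTAMPING_TX_SOFTWARE |
				   SOF_TIMESTAMPING_TX_ACK;
			ret = setsockopt(cfd, SOL_SOCKET, SO_TIMESTAMPING,
					 (char *)&sock_opt, sizeof(sock_opt));
			if (!ASSERT_OK(ret, "setsockopt SO_TIMESTAMPING"))
				goto out;
		}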

> +	if (!ASSERT_OK(ret, "setsockopt SO_TIMESTAMPING"))
> +		goto out;
> +
> +	ret = clock_gettime(CLOCK_REALTIME, &usr_ts);
> +	if (!ASSERT_OK(ret, "get user time"))
> +		goto out;
> +
> +	ret = write(cfd, buf, sizeof(buf));
> +	if (!ASSERT_EQ(ret, sizeof(buf), "send to server"))
> +		goto out;
> +
> +	/* Test if socket timestamping works correctly even with bpf
> +	 * extension enabled.
> +	 */
> +	test_socket_timestamping(cfd);
> +
> +	ASSERT_EQ(bss->nr_active, 1, "nr_active");
> +	ASSERT_EQ(bss->nr_snd, 2, "nr_snd");
> +	ASSERT_EQ(bss->nr_sched, 1, "nr_sched");
> +	ASSERT_EQ(bss->nr_txsw, 1, "nr_txsw");
> +	ASSERT_EQ(bss->nr_ack, 1, "nr_ack");
> +
> +out:
> +	if (sfd >= 0)
> +		close(sfd);
> +	if (cfd >= 0)
> +		close(cfd);
> +}
> +
> +void test_net_timestamping(void)
> +{
> +	struct netns_obj *ns;
> +	int cg_fd;
> +
> +	cg_fd = test__join_cgroup(CG_NAME);
> +	if (!ASSERT_OK_FD(cg_fd, "join cgroup"))
> +		return;
> +
> +	ns = netns_new("net_timestamping_ns", true);
> +	if (!ASSERT_OK_PTR(ns, "create ns"))
> +		goto done;
> +
> +	skel = net_timestamping__open_and_load();
> +	if (!ASSERT_OK_PTR(skel, "open and load skel"))
> +		goto done;
> +
> +	if (!ASSERT_OK(net_timestamping__attach(skel), "attach skel"))
> +		goto done;
> +
> +	skel->links.skops_sockopt =
> +		bpf_program__attach_cgroup(skel->progs.skops_sockopt, cg_fd);
> +	if (!ASSERT_OK_PTR(skel->links.skops_sockopt, "attach cgroup"))
> +		goto done;
> +
> +	test_tcp(AF_INET6);
> +	test_tcp(AF_INET);

Considering the with and without SO_TIMESTAMPING combinations (i.e. x2), it is
worth having proper subtests. It is easy also. Take a look at the
test__start_subtest() usage under prog_tests/.
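
For instance, reusing the made-up bool parameter on test_tcp() from above to
toggle SO_TIMESTAMPING (subtest names are placeholders):

	if (test__start_subtest("tcp_v4"))
		test_tcp(AF_INET, false);
	if (test__start_subtest("tcp_v4_so_timestamping"))
		test_tcp(AF_INET, true);
	if (test__start_subtest("tcp_v6"))
		test_tcp(AF_INET6, false);
	if (test__start_subtest("tcp_v6_so_timestamping"))
		test_tcp(AF_INET6, true);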

> +
> +done:
> +	net_timestamping__destroy(skel);
> +	netns_free(ns);
> +	close(cg_fd);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/net_timestamping.c b/tools/testing/selftests/bpf/progs/net_timestamping.c
> new file mode 100644
> index 000000000000..d3e1da599626
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/net_timestamping.c
> @@ -0,0 +1,244 @@
> +#include "vmlinux.h"
> +#include "bpf_tracing_net.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +#include "bpf_misc.h"
> +#include "bpf_kfuncs.h"
> +#include <errno.h>
> +
> +#define SK_BPF_CB_FLAGS 1009
> +#define SK_BPF_CB_TX_TIMESTAMPING 1

Remove these two defines. The vmlinux.h already has them.

[ ... ]

> +SEC("fentry/tcp_sendmsg_locked")
> +int BPF_PROG(trace_tcp_sendmsg_locked, struct sock *sk, struct msghdr *msg, size_t size)
> +{
> +	u64 timestamp = bpf_ktime_get_ns();
> +	u32 flag = sk->sk_bpf_cb_flags;
> +	struct sk_stg *stg;
> +
> +	if (!flag)

I just noticed this one.

Let's replace the "flag" check with a better check (e.g. the pid check used in
other tests). Then it won't affect the sk of other tests running in parallel.

It is pretty easy. Take a look at how bpf_get_current_pid_tgid() is used in 
progs/local_storage.c.
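
Roughly, following that pattern (monitored_pid is a made-up global that the
user space part of the test would set to its own pid):

	__u32 monitored_pid;

	SEC("fentry/tcp_sendmsg_locked")
	int BPF_PROG(trace_tcp_sendmsg_locked, struct sock *sk, struct msghdr *msg, size_t size)
	{
		__u32 pid = bpf_get_current_pid_tgid() >> 32;

		if (pid != monitored_pid)
			return 0;

		/* ... the rest stays as is ... */
	}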


> +		return 0;
> +
> +	stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
> +				 BPF_SK_STORAGE_GET_F_CREATE);
> +	if (!stg)
> +		return 0;
> +
> +	stg->sendmsg_ns = timestamp;
> +	nr_snd += 1;
> +	return 0;
> +}
> +

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  2025-02-13  0:07     ` Jason Xing
@ 2025-02-13  7:23       ` Jason Xing
  2025-02-13 15:09         ` Willem de Bruijn
  0 siblings, 1 reply; 25+ messages in thread
From: Jason Xing @ 2025-02-13  7:23 UTC (permalink / raw)
  To: Willem de Bruijn
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemb, ast, daniel,
	andrii, martin.lau, eddyz87, song, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf, netdev

On Thu, Feb 13, 2025 at 8:07 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> On Wed, Feb 12, 2025 at 11:26 PM Willem de Bruijn
> <willemdebruijn.kernel@gmail.com> wrote:
> >
> > Jason Xing wrote:
> > > Support the ACK case for bpf timestamping.
> > >
> > > Add a new sock_ops callback, BPF_SOCK_OPS_TS_ACK_OPT_CB. This
> > > callback will occur at the same timestamping point as the user
> > > space's SCM_TSTAMP_ACK. The BPF program can use it to get the
> > > same SCM_TSTAMP_ACK timestamp without modifying the user-space
> > > application.
> > >
> > > This patch extends txstamp_ack to two bits: 1 stands for
> > > SO_TIMESTAMPING mode, 2 bpf extension.
> > >
> > > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > > ---
> > >  include/net/tcp.h              | 6 ++++--
> > >  include/uapi/linux/bpf.h       | 5 +++++
> > >  net/core/skbuff.c              | 5 ++++-
> > >  net/dsa/user.c                 | 2 +-
> > >  net/ipv4/tcp.c                 | 2 +-
> > >  net/socket.c                   | 2 +-
> > >  tools/include/uapi/linux/bpf.h | 5 +++++
> > >  7 files changed, 21 insertions(+), 6 deletions(-)
> >
> > > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > > index 0d704bda6c41..aa080f7ccea4 100644
> > > --- a/net/ipv4/tcp.c
> > > +++ b/net/ipv4/tcp.c
> > > @@ -488,7 +488,7 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
> > >
> > >               sock_tx_timestamp(sk, sockc, &shinfo->tx_flags);
> > >               if (tsflags & SOF_TIMESTAMPING_TX_ACK)
> > > -                     tcb->txstamp_ack = 1;
> > > +                     tcb->txstamp_ack = TSTAMP_ACK_SK;
> >
> > Similar to the BPF code, should this be |= TSTAMP_ACK_SK?
> >
> > Does not matter in practice if the BPF setter can never precede this.
>
> I had the same thought on this too. We've already fixed the position
> and order (of using socket timestamping and bpf timestamping).
>
> I have no strong preference. If you insist, I can surely adjust it.

I updated it in the next version locally :)

>
> Thanks,
> Jason

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback
  2025-02-12 23:18   ` Martin KaFai Lau
@ 2025-02-13  7:24     ` Jason Xing
  0 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-13  7:24 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On Thu, Feb 13, 2025 at 7:19 AM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 2/11/25 10:18 PM, Jason Xing wrote:
> > Support sw SCM_TSTAMP_SND case for bpf timestamping.
> >
> > Add a new sock_ops callback, BPF_SOCK_OPS_TS_SW_OPT_CB. This
> > callback will occur at the same timestamping point as the user
> > space's software SCM_TSTAMP_SND. The BPF program can use it to
> > get the same SCM_TSTAMP_SND timestamp without modifying the
> > user-space application.
> >
> > Based on this patch, BPF program will get the software
> > timestamp when the driver is ready to send the skb. In the
> > subsequent patch, the hardware timestamp will be supported.
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> >   include/linux/skbuff.h         | 2 +-
> >   include/uapi/linux/bpf.h       | 4 ++++
> >   net/core/skbuff.c              | 9 ++++++++-
> >   tools/include/uapi/linux/bpf.h | 4 ++++
> >   4 files changed, 17 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 52f6e033e704..76582500c5ea 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -4568,7 +4568,7 @@ void skb_tstamp_tx(struct sk_buff *orig_skb,
> >   static inline void skb_tx_timestamp(struct sk_buff *skb)
> >   {
> >       skb_clone_tx_timestamp(skb);
> > -     if (skb_shinfo(skb)->tx_flags & SKBTX_SW_TSTAMP)
> > +     if (skb_shinfo(skb)->tx_flags & (SKBTX_SW_TSTAMP | SKBTX_BPF))
> >               skb_tstamp_tx(skb, NULL);
> >   }
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 68664ececdc0..b3bd92281084 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -7039,6 +7039,10 @@ enum {
> >                                        * dev layer when SK_BPF_CB_TX_TIMESTAMPING
> >                                        * feature is on.
> >                                        */
> > +     BPF_SOCK_OPS_TS_SW_OPT_CB,      /* Called when skb is about to send
> > +                                      * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> > +                                      * feature is on.
> > +                                      */
> >   };
> >
> >   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 7bac5e950e3d..d80d2137692f 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -5557,6 +5557,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
> >   }
> >
> >   static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
> > +                                               struct skb_shared_hwtstamps *hwts,
>
> s/hwts/hwtstamps/
> Use the same argument name as all other functions in this file. Its caller is
> using hwtstamps as the argument name also. Easier to follow.
>
> Probably the same for the skb_tstamp_tx_report_so_timestamping().

Got it. Next version will include this modification.

Thanks,
Jason

>
> >                                                 struct sock *sk,
> >                                                 int tstype)
> >   {
> > @@ -5566,6 +5567,11 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
> >       case SCM_TSTAMP_SCHED:
> >               op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
> >               break;
> > +     case SCM_TSTAMP_SND:
> > +             if (hwts)
> > +                     return;
> > +             op = BPF_SOCK_OPS_TS_SW_OPT_CB;
> > +             break;
> >       default:
> >               return;
> >       }
> > @@ -5586,7 +5592,8 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
> >               return;
> >
> >       if (skb_shinfo(orig_skb)->tx_flags & SKBTX_BPF)
> > -             skb_tstamp_tx_report_bpf_timestamping(orig_skb, sk, tstype);
> > +             skb_tstamp_tx_report_bpf_timestamping(orig_skb, hwtstamps,
> > +                                                   sk, tstype);
> >
> >       if (!skb_tstamp_tx_report_so_timestamping(orig_skb, hwtstamps, tstype))
> >               return;
> > diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> > index eed91b7296b7..9bd1c7c77b17 100644
> > --- a/tools/include/uapi/linux/bpf.h
> > +++ b/tools/include/uapi/linux/bpf.h
> > @@ -7029,6 +7029,10 @@ enum {
> >                                        * dev layer when SK_BPF_CB_TX_TIMESTAMPING
> >                                        * feature is on.
> >                                        */
> > +     BPF_SOCK_OPS_TS_SW_OPT_CB,      /* Called when skb is about to send
> > +                                      * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> > +                                      * feature is on.
> > +                                      */
> >   };
> >
> >   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback
  2025-02-12 23:20   ` Martin KaFai Lau
@ 2025-02-13  7:24     ` Jason Xing
  0 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-13  7:24 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On Thu, Feb 13, 2025 at 7:20 AM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 2/11/25 10:18 PM, Jason Xing wrote:
> > Support hw SCM_TSTAMP_SND case for bpf timestamping.
> >
> > Add a new sock_ops callback, BPF_SOCK_OPS_TS_HW_OPT_CB. This
> > callback will occur at the same timestamping point as the user
> > space's hardware SCM_TSTAMP_SND. The BPF program can use it to
> > get the same SCM_TSTAMP_SND timestamp without modifying the
> > user-space application.
> >
> > To avoid increasing the code complexity, replace SKBTX_HW_TSTAMP
> > with SKBTX_HW_TSTAMP_NOBPF instead of changing numerous callers
> > from driver side using SKBTX_HW_TSTAMP. The new definition of
> > SKBTX_HW_TSTAMP means the combination tests of socket timestamping
> > and bpf timestamping. After this patch, drivers can work under the
> > bpf timestamping.
> >
> > Considering some drivers don't assign the hardware timestamp to the
> > skb, this patch does the assignment so that the BPF program can
> > acquire the hwstamp from the skb directly.
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> >   include/linux/skbuff.h         | 4 +++-
> >   include/uapi/linux/bpf.h       | 4 ++++
> >   net/core/skbuff.c              | 6 +++---
> >   tools/include/uapi/linux/bpf.h | 4 ++++
> >   4 files changed, 14 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 76582500c5ea..0b4f1889500d 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -470,7 +470,7 @@ struct skb_shared_hwtstamps {
> >   /* Definitions for tx_flags in struct skb_shared_info */
> >   enum {
> >       /* generate hardware time stamp */
> > -     SKBTX_HW_TSTAMP = 1 << 0,
> > +     SKBTX_HW_TSTAMP_NOBPF = 1 << 0,
> >
> >       /* generate software time stamp when queueing packet to NIC */
> >       SKBTX_SW_TSTAMP = 1 << 1,
> > @@ -494,6 +494,8 @@ enum {
> >       SKBTX_BPF = 1 << 7,
> >   };
> >
> > +#define SKBTX_HW_TSTAMP              (SKBTX_HW_TSTAMP_NOBPF | SKBTX_BPF)
> > +
> >   #define SKBTX_ANY_SW_TSTAMP (SKBTX_SW_TSTAMP    | \
> >                                SKBTX_SCHED_TSTAMP | \
> >                                SKBTX_BPF)
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index b3bd92281084..f70edd067edf 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -7043,6 +7043,10 @@ enum {
> >                                        * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> >                                        * feature is on.
> >                                        */
> > +     BPF_SOCK_OPS_TS_HW_OPT_CB,      /* Called in hardware phase when
> > +                                      * SK_BPF_CB_TX_TIMESTAMPING feature
> > +                                      * is on.
> > +                                      */
> >   };
> >
> >   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index d80d2137692f..4930c43ee77b 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -5547,7 +5547,7 @@ static bool skb_tstamp_tx_report_so_timestamping(struct sk_buff *skb,
> >       case SCM_TSTAMP_SCHED:
> >               return skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP;
> >       case SCM_TSTAMP_SND:
> > -             return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP :
> > +             return skb_shinfo(skb)->tx_flags & (hwts ? SKBTX_HW_TSTAMP_NOBPF :
> >                                                   SKBTX_SW_TSTAMP);
> >       case SCM_TSTAMP_ACK:
> >               return TCP_SKB_CB(skb)->txstamp_ack;
> > @@ -5568,9 +5568,9 @@ static void skb_tstamp_tx_report_bpf_timestamping(struct sk_buff *skb,
> >               op = BPF_SOCK_OPS_TS_SCHED_OPT_CB;
> >               break;
> >       case SCM_TSTAMP_SND:
> > +             op = hwts ? BPF_SOCK_OPS_TS_HW_OPT_CB : BPF_SOCK_OPS_TS_SW_OPT_CB;
>
> Remove this "hwts" test.
>
> >               if (hwts)
>
> Reuse this and do everything in this "if else" statement.

Will do it.

Thanks,
Jason

>
> > -                     return;
> > -             op = BPF_SOCK_OPS_TS_SW_OPT_CB;
> > +                     *skb_hwtstamps(skb) = *hwts;
> >               break;
> >       default:
> >               return;
> > diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> > index 9bd1c7c77b17..7b9652ce7e3c 100644
> > --- a/tools/include/uapi/linux/bpf.h
> > +++ b/tools/include/uapi/linux/bpf.h
> > @@ -7033,6 +7033,10 @@ enum {
> >                                        * to the nic when SK_BPF_CB_TX_TIMESTAMPING
> >                                        * feature is on.
> >                                        */
> > +     BPF_SOCK_OPS_TS_HW_OPT_CB,      /* Called in hardware phase when
> > +                                      * SK_BPF_CB_TX_TIMESTAMPING feature
> > +                                      * is on.
> > +                                      */
> >   };
> >
> >   /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping
  2025-02-12 23:49   ` Martin KaFai Lau
@ 2025-02-13  7:26     ` Jason Xing
  0 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-13  7:26 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On Thu, Feb 13, 2025 at 7:49 AM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 2/11/25 10:18 PM, Jason Xing wrote:
> > Add the bpf_sock_ops_enable_tx_tstamp kfunc to allow BPF programs to
> > selectively enable TX timestamping on a skb during tcp_sendmsg().
> >
> > For example, BPF program will limit tracking X numbers of packets
> > and then will stop there instead of tracing all the sendmsgs of
> > matched flow all along. It would be helpful for users who cannot
> > afford to calculate latencies from every sendmsg call probably
> > due to the performance or storage space consideration.
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> >   kernel/bpf/btf.c  |  1 +
> >   net/core/filter.c | 32 +++++++++++++++++++++++++++++++-
> >   2 files changed, 32 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > index 9433b6467bbe..740210f883dc 100644
> > --- a/kernel/bpf/btf.c
> > +++ b/kernel/bpf/btf.c
> > @@ -8522,6 +8522,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
> >       case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> >       case BPF_PROG_TYPE_CGROUP_SOCKOPT:
> >       case BPF_PROG_TYPE_CGROUP_SYSCTL:
> > +     case BPF_PROG_TYPE_SOCK_OPS:
> >               return BTF_KFUNC_HOOK_CGROUP;
> >       case BPF_PROG_TYPE_SCHED_ACT:
> >               return BTF_KFUNC_HOOK_SCHED_ACT;
> > diff --git a/net/core/filter.c b/net/core/filter.c
> > index 7f56d0bbeb00..36793c68b125 100644
> > --- a/net/core/filter.c
> > +++ b/net/core/filter.c
> > @@ -12102,6 +12102,26 @@ __bpf_kfunc int bpf_sk_assign_tcp_reqsk(struct __sk_buff *s, struct sock *sk,
> >   #endif
> >   }
> >
> > +__bpf_kfunc int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
> > +                                           u64 flags)
> > +{
> > +     struct sk_buff *skb;
> > +     struct sock *sk;
> > +
> > +     if (skops->op != BPF_SOCK_OPS_TS_SND_CB)
> > +             return -EOPNOTSUPP;
>
> It still needs to test the "flags" so that they can be used in the future...
>
>         if (flags)
>                 return -EINVAL;

Will add it.

> > +
> > +     skb = skops->skb;
> > +     sk = skops->sk;
> > +     skb_shinfo(skb)->tx_flags |= SKBTX_BPF;
> > +     if (sk_is_tcp(sk)) {
>
> An unnecessary check like this will only confuse the reader. Remove it and
> revisit when UDP is supported.

Okay.

Thanks,
Jason

>
> > +             TCP_SKB_CB(skb)->txstamp_ack |= TSTAMP_ACK_BPF;
> > +             skb_shinfo(skb)->tskey = TCP_SKB_CB(skb)->seq + skb->len - 1;
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> >   __bpf_kfunc_end_defs();
> >
> >   int bpf_dynptr_from_skb_rdonly(struct __sk_buff *skb, u64 flags,
> > @@ -12135,6 +12155,10 @@ BTF_KFUNCS_START(bpf_kfunc_check_set_tcp_reqsk)
> >   BTF_ID_FLAGS(func, bpf_sk_assign_tcp_reqsk, KF_TRUSTED_ARGS)
> >   BTF_KFUNCS_END(bpf_kfunc_check_set_tcp_reqsk)
> >
> > +BTF_KFUNCS_START(bpf_kfunc_check_set_sock_ops)
> > +BTF_ID_FLAGS(func, bpf_sock_ops_enable_tx_tstamp, KF_TRUSTED_ARGS)
> > +BTF_KFUNCS_END(bpf_kfunc_check_set_sock_ops)
> > +
> >   static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
> >       .owner = THIS_MODULE,
> >       .set = &bpf_kfunc_check_set_skb,
> > @@ -12155,6 +12179,11 @@ static const struct btf_kfunc_id_set bpf_kfunc_set_tcp_reqsk = {
> >       .set = &bpf_kfunc_check_set_tcp_reqsk,
> >   };
> >
> > +static const struct btf_kfunc_id_set bpf_kfunc_set_sock_ops = {
> > +     .owner = THIS_MODULE,
> > +     .set = &bpf_kfunc_check_set_sock_ops,
> > +};
> > +
> >   static int __init bpf_kfunc_init(void)
> >   {
> >       int ret;
> > @@ -12173,7 +12202,8 @@ static int __init bpf_kfunc_init(void)
> >       ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
> >       ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
> >                                              &bpf_kfunc_set_sock_addr);
> > -     return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
> > +     ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_kfunc_set_tcp_reqsk);
> > +     return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SOCK_OPS, &bpf_kfunc_set_sock_ops);
> >   }
> >   late_initcall(bpf_kfunc_init);
> >
>
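
To make the intended usage concrete, a rough sketch of a sockops program that
samples only the first few sendmsg calls it sees. The program name, the sample
budget, and the assumption that SK_BPF_CB_TX_TIMESTAMPING has already been
enabled on the socket via bpf_setsockopt(SK_BPF_CB_FLAGS, ...) are
illustrative, not taken from the patch:

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	/* kfunc added by this patch; the cast helper already exists */
	extern void *bpf_cast_to_kern_ctx(void *obj) __ksym;
	extern int bpf_sock_ops_enable_tx_tstamp(struct bpf_sock_ops_kern *skops,
						 u64 flags) __ksym;

	#define MAX_SAMPLES 32	/* arbitrary sampling budget */

	u32 nr_sampled;

	SEC("sockops")
	int sample_tx_tstamp(struct bpf_sock_ops *skops)
	{
		struct bpf_sock_ops_kern *skops_kern;

		/* Only request a timestamp at sendmsg time, and only for
		 * the first MAX_SAMPLES calls this program sees.
		 */
		if (skops->op != BPF_SOCK_OPS_TS_SND_CB ||
		    nr_sampled >= MAX_SAMPLES)
			return 1;

		skops_kern = bpf_cast_to_kern_ctx(skops);
		if (!bpf_sock_ops_enable_tx_tstamp(skops_kern, 0))
			nr_sampled++;

		return 1;
	}

	char _license[] SEC("license") = "GPL";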

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature
  2025-02-13  1:08   ` Martin KaFai Lau
@ 2025-02-13 11:31     ` Jason Xing
  0 siblings, 0 replies; 25+ messages in thread
From: Jason Xing @ 2025-02-13 11:31 UTC (permalink / raw)
  To: Martin KaFai Lau
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemdebruijn.kernel,
	willemb, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf,
	netdev

On Thu, Feb 13, 2025 at 9:08 AM Martin KaFai Lau <martin.lau@linux.dev> wrote:
>
> On 2/11/25 10:18 PM, Jason Xing wrote:
> > The BPF program calculates latency deltas between consecutive tx
> > timestamping callbacks. It can be used in the real world to diagnose
> > kernel behaviour in the tx path.
> >
> > Check the safety restrictions by accessing a few bpf calls in
> > bpf_test_access_bpf_calls(); the restrictions are implemented in
> > patches 3 and 4.
> >
> > Check that bpf timestamping can co-exist with socket timestamping.
> >
> > There remain a few realistic things[1][2] to highlight:
> > 1. in general a packet may pass through multiple qdiscs. For instance
> > with bonding or tunnel virtual devices in the egress path.
> > 2. packets may be resent, in which case an ACK might precede a repeat
> > SCHED and SND.
> > 3. erroneous or malicious peers may also just never send an ACK.
> >
> > [1]: https://lore.kernel.org/all/67a389af981b0_14e0832949d@willemb.c.googlers.com.notmuch/
> > [2]: https://lore.kernel.org/all/c329a0c1-239b-4ca1-91f2-cb30b8dd2f6a@linux.dev/
> >
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> >   .../bpf/prog_tests/net_timestamping.c         | 231 +++++++++++++++++
> >   .../selftests/bpf/progs/net_timestamping.c    | 244 ++++++++++++++++++
> >   2 files changed, 475 insertions(+)
> >   create mode 100644 tools/testing/selftests/bpf/prog_tests/net_timestamping.c
> >   create mode 100644 tools/testing/selftests/bpf/progs/net_timestamping.c
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/net_timestamping.c b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
> > new file mode 100644
> > index 000000000000..dcdc40473a7d
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/net_timestamping.c
> > @@ -0,0 +1,231 @@
> > +#include <linux/net_tstamp.h>
> > +#include <sys/time.h>
> > +#include <linux/errqueue.h>
> > +#include "test_progs.h"
> > +#include "network_helpers.h"
> > +#include "net_timestamping.skel.h"
> > +
> > +#define CG_NAME "/net-timestamping-test"
> > +#define NSEC_PER_SEC    1000000000LL
> > +
> > +static const char addr4_str[] = "127.0.0.1";
> > +static const char addr6_str[] = "::1";
> > +static struct net_timestamping *skel;
> > +static int cfg_payload_len = 30;
>
> const ?

Will add it.

>
> > +static struct timespec usr_ts;
> > +static u64 delay_tolerance_nsec = 10000000000; /* 10 seconds */
> > +int SK_TS_SCHED;
> > +int SK_TS_TXSW;
> > +int SK_TS_ACK;
> > +
> > +static int64_t timespec_to_ns64(struct timespec *ts)
> > +{
> > +     return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
> > +}
> > +
> > +static void validate_key(int tskey, int tstype)
> > +{
> > +     static int expected_tskey = -1;
> > +
> > +     if (tstype == SCM_TSTAMP_SCHED)
> > +             expected_tskey = cfg_payload_len - 1;
> > +
> > +     ASSERT_EQ(expected_tskey, tskey, "tskey mismatch");
> > +
> > +     expected_tskey = tskey;
> > +}
> > +
> > +static void validate_timestamp(struct timespec *cur, struct timespec *prev)
> > +{
> > +     int64_t cur_ns, prev_ns;
> > +
> > +     cur_ns = timespec_to_ns64(cur);
> > +     prev_ns = timespec_to_ns64(prev);
> > +
> > +     ASSERT_TRUE((cur_ns - prev_ns) < delay_tolerance_nsec, "latency");
>
> ASSERT_LT()

Got it!

>
> > +}
> > +
> > +static void test_socket_timestamp(struct scm_timestamping *tss, int tstype,
> > +                               int tskey)
> > +{
> > +     static struct timespec *prev_ts = &usr_ts;
> > +
> > +     validate_key(tskey, tstype);
> > +
> > +     switch (tstype) {
> > +     case SCM_TSTAMP_SCHED:
> > +             validate_timestamp(&tss->ts[0], prev_ts);
> > +             SK_TS_SCHED = 1;
> > +             SK_TS_TXSW = SK_TS_ACK = 0;
> > +             break;
> > +     case SCM_TSTAMP_SND:
> > +             validate_timestamp(&tss->ts[0], prev_ts);
> > +             SK_TS_TXSW = 1;
> > +             break;
> > +     case SCM_TSTAMP_ACK:
> > +             validate_timestamp(&tss->ts[0], prev_ts);
> > +             SK_TS_ACK = 1;
> > +             break;
> > +     }
> > +
> > +     prev_ts = &tss->ts[0];
> > +}
> > +
> > +static void test_recv_errmsg_cmsg(struct msghdr *msg)
> > +{
> > +     struct sock_extended_err *serr = NULL;
> > +     struct scm_timestamping *tss = NULL;
> > +     struct cmsghdr *cm;
> > +
> > +     for (cm = CMSG_FIRSTHDR(msg);
> > +          cm && cm->cmsg_len;
> > +          cm = CMSG_NXTHDR(msg, cm)) {
> > +             if (cm->cmsg_level == SOL_SOCKET &&
> > +                 cm->cmsg_type == SCM_TIMESTAMPING) {
> > +                     tss = (void *) CMSG_DATA(cm);
> > +             } else if ((cm->cmsg_level == SOL_IP &&
> > +                         cm->cmsg_type == IP_RECVERR) ||
> > +                        (cm->cmsg_level == SOL_IPV6 &&
> > +                         cm->cmsg_type == IPV6_RECVERR) ||
> > +                        (cm->cmsg_level == SOL_PACKET &&
> > +                         cm->cmsg_type == PACKET_TX_TIMESTAMP)) {
> > +                     serr = (void *) CMSG_DATA(cm);
> > +                     ASSERT_EQ(serr->ee_origin, SO_EE_ORIGIN_TIMESTAMPING,
> > +                                 "cmsg type");
> > +             }
> > +
> > +             if (serr && tss)
> > +                     test_socket_timestamp(tss, serr->ee_info,
> > +                                           serr->ee_data);
> > +     }
> > +}
> > +
> > +static bool socket_recv_errmsg(int fd)
> > +{
> > +     static char ctrl[1024 /* overprovision*/];
> > +     char data[cfg_payload_len];
> > +     static struct msghdr msg;
> > +     struct iovec entry;
> > +     int n = 0;
> > +
> > +     memset(&msg, 0, sizeof(msg));
> > +     memset(&entry, 0, sizeof(entry));
> > +     memset(ctrl, 0, sizeof(ctrl));
> > +
> > +     entry.iov_base = data;
> > +     entry.iov_len = cfg_payload_len;
> > +     msg.msg_iov = &entry;
> > +     msg.msg_iovlen = 1;
> > +     msg.msg_name = NULL;
> > +     msg.msg_namelen = 0;
> > +     msg.msg_control = ctrl;
> > +     msg.msg_controllen = sizeof(ctrl);
> > +
> > +     n = recvmsg(fd, &msg, MSG_ERRQUEUE);
> > +     if (n == -1)
> > +             ASSERT_EQ(errno, EAGAIN, "recvmsg MSG_ERRQUEUE");
> > +
> > +     if (n >= 0)
> > +             test_recv_errmsg_cmsg(&msg);
> > +
> > +     return n == -1;
> > +
> > +}
> > +
> > +static void test_socket_timestamping(int fd)
> > +{
> > +     while (!socket_recv_errmsg(fd));
> > +
> > +     ASSERT_EQ(SK_TS_SCHED, 1, "SCM_TSTAMP_SCHED");
> > +     ASSERT_EQ(SK_TS_TXSW, 1, "SCM_TSTAMP_SND");
> > +     ASSERT_EQ(SK_TS_ACK, 1, "SCM_TSTAMP_ACK");
> > +}
> > +
> > +static void test_tcp(int family)
> > +{
> > +     struct net_timestamping__bss *bss = skel->bss;
> > +     char buf[cfg_payload_len];
> > +     int sfd = -1, cfd = -1;
> > +     unsigned int sock_opt;
> > +     int ret;
> > +
> > +     memset(bss, 0, sizeof(*bss));
>
> No need to reset some of the new global variables, e.g. SK_TS_SCHED?

I thought I had handled it well, but the fact is .... Will fix it.

>
> > +
> > +     sfd = start_server(family, SOCK_STREAM,
> > +                        family == AF_INET6 ? addr6_str : addr4_str, 0, 0);
> > +     if (!ASSERT_OK_FD(sfd, "start_server"))
> > +             goto out;
> > +
> > +     cfd = connect_to_fd(sfd, 0);
> > +     if (!ASSERT_OK_FD(cfd, "connect_to_fd_server"))
> > +             goto out;
> > +
> > +     sock_opt = SOF_TIMESTAMPING_SOFTWARE |
> > +                SOF_TIMESTAMPING_OPT_ID |
> > +                SOF_TIMESTAMPING_TX_SCHED |
> > +                SOF_TIMESTAMPING_TX_SOFTWARE |
> > +                SOF_TIMESTAMPING_TX_ACK;
> > +     ret = setsockopt(cfd, SOL_SOCKET, SO_TIMESTAMPING,
> > +                      (char *) &sock_opt, sizeof(sock_opt));
>
> It also needs the original test from v9 to check that bpf timestamping works
> without the user space's SO_TIMESTAMPING, which is the major use case of this
> series.
>
> It should be easy to do by conditionally enabling SO_TIMESTAMPING here.

Agreed.

>
> > +     if (!ASSERT_OK(ret, "setsockopt SO_TIMESTAMPING"))
> > +             goto out;
> > +
> > +     ret = clock_gettime(CLOCK_REALTIME, &usr_ts);
> > +     if (!ASSERT_OK(ret, "get user time"))
> > +             goto out;
> > +
> > +     ret = write(cfd, buf, sizeof(buf));
> > +     if (!ASSERT_EQ(ret, sizeof(buf), "send to server"))
> > +             goto out;
> > +
> > +     /* Test if socket timestamping works correctly even with bpf
> > +      * extension enabled.
> > +      */
> > +     test_socket_timestamping(cfd);
> > +
> > +     ASSERT_EQ(bss->nr_active, 1, "nr_active");
> > +     ASSERT_EQ(bss->nr_snd, 2, "nr_snd");
> > +     ASSERT_EQ(bss->nr_sched, 1, "nr_sched");
> > +     ASSERT_EQ(bss->nr_txsw, 1, "nr_txsw");
> > +     ASSERT_EQ(bss->nr_ack, 1, "nr_ack");
> > +
> > +out:
> > +     if (sfd >= 0)
> > +             close(sfd);
> > +     if (cfd >= 0)
> > +             close(cfd);
> > +}
> > +
> > +void test_net_timestamping(void)
> > +{
> > +     struct netns_obj *ns;
> > +     int cg_fd;
> > +
> > +     cg_fd = test__join_cgroup(CG_NAME);
> > +     if (!ASSERT_OK_FD(cg_fd, "join cgroup"))
> > +             return;
> > +
> > +     ns = netns_new("net_timestamping_ns", true);
> > +     if (!ASSERT_OK_PTR(ns, "create ns"))
> > +             goto done;
> > +
> > +     skel = net_timestamping__open_and_load();
> > +     if (!ASSERT_OK_PTR(skel, "open and load skel"))
> > +             goto done;
> > +
> > +     if (!ASSERT_OK(net_timestamping__attach(skel), "attach skel"))
> > +             goto done;
> > +
> > +     skel->links.skops_sockopt =
> > +             bpf_program__attach_cgroup(skel->progs.skops_sockopt, cg_fd);
> > +     if (!ASSERT_OK_PTR(skel->links.skops_sockopt, "attach cgroup"))
> > +             goto done;
> > +
> > +     test_tcp(AF_INET6);
> > +     test_tcp(AF_INET);
>
> Considering the with and without SO_TIMESTAMPING combinations (i.e. x2), it
> is worth having proper subtests. It is also easy. Take a look at the
> test__start_subtest() usage under prog_tests/.

Will use it :)
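
Something along these lines, where the extra bool passed to test_tcp() for
toggling SO_TIMESTAMPING is a hypothetical extension rather than the code in
this patch:

	if (test__start_subtest("tcp_ipv4"))
		test_tcp(AF_INET, /* so_timestamping */ true);
	if (test__start_subtest("tcp_ipv4_bpf_only"))
		test_tcp(AF_INET, /* so_timestamping */ false);
	if (test__start_subtest("tcp_ipv6"))
		test_tcp(AF_INET6, /* so_timestamping */ true);
	if (test__start_subtest("tcp_ipv6_bpf_only"))
		test_tcp(AF_INET6, /* so_timestamping */ false);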

>
> > +
> > +done:
> > +     net_timestamping__destroy(skel);
> > +     netns_free(ns);
> > +     close(cg_fd);
> > +}
> > diff --git a/tools/testing/selftests/bpf/progs/net_timestamping.c b/tools/testing/selftests/bpf/progs/net_timestamping.c
> > new file mode 100644
> > index 000000000000..d3e1da599626
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/net_timestamping.c
> > @@ -0,0 +1,244 @@
> > +#include "vmlinux.h"
> > +#include "bpf_tracing_net.h"
> > +#include <bpf/bpf_helpers.h>
> > +#include <bpf/bpf_tracing.h>
> > +#include "bpf_misc.h"
> > +#include "bpf_kfuncs.h"
> > +#include <errno.h>
> > +
> > +#define SK_BPF_CB_FLAGS 1009
> > +#define SK_BPF_CB_TX_TIMESTAMPING 1
>
> Remove these two defines. The vmlinux.h has it.

Will do it.

>
> [ ... ]
>
> > +SEC("fentry/tcp_sendmsg_locked")
> > +int BPF_PROG(trace_tcp_sendmsg_locked, struct sock *sk, struct msghdr *msg, size_t size)
> > +{
> > +     u64 timestamp = bpf_ktime_get_ns();
> > +     u32 flag = sk->sk_bpf_cb_flags;
> > +     struct sk_stg *stg;
> > +
> > +     if (!flag)
>
> I just noticed this one.
>
> Let's replace the "flag" check with a better check (e.g. the pid check used
> in other tests). Then it won't affect the sk of other tests running in
> parallel.
>
> It is pretty easy. Take a look at how bpf_get_current_pid_tgid() is used in
> progs/local_storage.c.

Thanks, I would use "if (pid != monitored_pid || !flag)" to test.

I've already made the changes as you suggested and it works. Thanks. I
will do more rounds of tests.

>
>
> > +             return 0;
> > +
> > +     stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
> > +                              BPF_SK_STORAGE_GET_F_CREATE);
> > +     if (!stg)
> > +             return 0;
> > +
> > +     stg->sendmsg_ns = timestamp;
> > +     nr_snd += 1;
> > +     return 0;
> > +}
> > +
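
A rough sketch of the pid-gated check mentioned above, slotting into the same
progs/net_timestamping.c (which already provides the includes, sk_stg_map and
the nr_snd counter); monitored_pid being set from the user-space test is an
assumption borrowed from other selftests, not the final code:

	u32 monitored_pid;	/* set by the user-space test before sending */

	SEC("fentry/tcp_sendmsg_locked")
	int BPF_PROG(trace_tcp_sendmsg_locked, struct sock *sk,
		     struct msghdr *msg, size_t size)
	{
		u32 pid = bpf_get_current_pid_tgid() >> 32;
		u64 timestamp = bpf_ktime_get_ns();
		u32 flag = sk->sk_bpf_cb_flags;
		struct sk_stg *stg;

		/* ignore sockets that belong to other tests running in parallel */
		if (pid != monitored_pid || !flag)
			return 0;

		stg = bpf_sk_storage_get(&sk_stg_map, sk, 0,
					 BPF_SK_STORAGE_GET_F_CREATE);
		if (!stg)
			return 0;

		stg->sendmsg_ns = timestamp;
		nr_snd += 1;
		return 0;
	}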

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback
  2025-02-13  7:23       ` Jason Xing
@ 2025-02-13 15:09         ` Willem de Bruijn
  0 siblings, 0 replies; 25+ messages in thread
From: Willem de Bruijn @ 2025-02-13 15:09 UTC (permalink / raw)
  To: Jason Xing, Willem de Bruijn
  Cc: davem, edumazet, kuba, pabeni, dsahern, willemb, ast, daniel,
	andrii, martin.lau, eddyz87, song, yonghong.song, john.fastabend,
	kpsingh, sdf, haoluo, jolsa, shuah, ykolal, bpf, netdev

Jason Xing wrote:
> On Thu, Feb 13, 2025 at 8:07 AM Jason Xing <kerneljasonxing@gmail.com> wrote:
> >
> > On Wed, Feb 12, 2025 at 11:26 PM Willem de Bruijn
> > <willemdebruijn.kernel@gmail.com> wrote:
> > >
> > > Jason Xing wrote:
> > > > Support the ACK case for bpf timestamping.
> > > >
> > > > Add a new sock_ops callback, BPF_SOCK_OPS_TS_ACK_OPT_CB. This
> > > > callback will occur at the same timestamping point as the user
> > > > space's SCM_TSTAMP_ACK. The BPF program can use it to get the
> > > > same SCM_TSTAMP_ACK timestamp without modifying the user-space
> > > > application.
> > > >
> > > > This patch extends txstamp_ack to two bits: 1 stands for
> > > > SO_TIMESTAMPING mode, 2 bpf extension.
> > > >
> > > > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > > > ---
> > > >  include/net/tcp.h              | 6 ++++--
> > > >  include/uapi/linux/bpf.h       | 5 +++++
> > > >  net/core/skbuff.c              | 5 ++++-
> > > >  net/dsa/user.c                 | 2 +-
> > > >  net/ipv4/tcp.c                 | 2 +-
> > > >  net/socket.c                   | 2 +-
> > > >  tools/include/uapi/linux/bpf.h | 5 +++++
> > > >  7 files changed, 21 insertions(+), 6 deletions(-)
> > >
> > > > diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> > > > index 0d704bda6c41..aa080f7ccea4 100644
> > > > --- a/net/ipv4/tcp.c
> > > > +++ b/net/ipv4/tcp.c
> > > > @@ -488,7 +488,7 @@ static void tcp_tx_timestamp(struct sock *sk, struct sockcm_cookie *sockc)
> > > >
> > > >               sock_tx_timestamp(sk, sockc, &shinfo->tx_flags);
> > > >               if (tsflags & SOF_TIMESTAMPING_TX_ACK)
> > > > -                     tcb->txstamp_ack = 1;
> > > > +                     tcb->txstamp_ack = TSTAMP_ACK_SK;
> > >
> > > Similar to the BPF code, should this by |= TSTAMP_ACK_SK?
> > >
> > > Does not matter in practice if the BPF setter can never precede this.
> >
> > I had the same thought. We've already fixed the relative position and
> > order of the socket timestamping and bpf timestamping paths.
> >
> > I have no strong preference. If you insist, I can surely adjust it.
> 
> I updated it in the next version locally :)

Great. I was going to say that I do prefer this.
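
For reference, the agreed change in tcp_tx_timestamp() amounts to ORing the
bit in rather than assigning it (sketch only):

	if (tsflags & SOF_TIMESTAMPING_TX_ACK)
		tcb->txstamp_ack |= TSTAMP_ACK_SK;

so the SO_TIMESTAMPING path cannot clobber a TSTAMP_ACK_BPF bit that a BPF
program may already have set, mirroring the |= used on the BPF side.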

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2025-02-13 15:09 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-02-12  6:18 [PATCH bpf-next v10 00/12] net-timestamp: bpf extension to equip applications transparently Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 01/12] bpf: add networking timestamping support to bpf_get/setsockopt() Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 02/12] bpf: prepare the sock_ops ctx and call bpf prog for TX timestamping Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 03/12] bpf: prevent unsafe access to the sock fields in the BPF timestamping callback Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 04/12] bpf: disable unsafe helpers in TX timestamping callbacks Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 05/12] net-timestamp: prepare for isolating two modes of SO_TIMESTAMPING Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 06/12] bpf: add BPF_SOCK_OPS_TS_SCHED_OPT_CB callback Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 07/12] bpf: add BPF_SOCK_OPS_TS_SW_OPT_CB callback Jason Xing
2025-02-12 23:18   ` Martin KaFai Lau
2025-02-13  7:24     ` Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 08/12] bpf: add BPF_SOCK_OPS_TS_HW_OPT_CB callback Jason Xing
2025-02-12 23:20   ` Martin KaFai Lau
2025-02-13  7:24     ` Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 09/12] bpf: add BPF_SOCK_OPS_TS_ACK_OPT_CB callback Jason Xing
2025-02-12 15:26   ` Willem de Bruijn
2025-02-13  0:07     ` Jason Xing
2025-02-13  7:23       ` Jason Xing
2025-02-13 15:09         ` Willem de Bruijn
2025-02-12  6:18 ` [PATCH bpf-next v10 10/12] bpf: add BPF_SOCK_OPS_TS_SND_CB callback Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 11/12] bpf: support selective sampling for bpf timestamping Jason Xing
2025-02-12 23:49   ` Martin KaFai Lau
2025-02-13  7:26     ` Jason Xing
2025-02-12  6:18 ` [PATCH bpf-next v10 12/12] selftests/bpf: add simple bpf tests in the tx path for timestamping feature Jason Xing
2025-02-13  1:08   ` Martin KaFai Lau
2025-02-13 11:31     ` Jason Xing

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).