All of lore.kernel.org
* [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock
@ 2026-05-09  6:44 Gang Yan
  2026-05-09  6:44 ` [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt Gang Yan
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Gang Yan @ 2026-05-09  6:44 UTC (permalink / raw)
  To: MPTCP Linux; +Cc: Gang Yan

This small series makes the MPTCP sockopt codepaths consistent with
TCP and the core socket layer by using the BPF-aware
sockopt_lock_sock()/sockopt_release_sock() helpers introduced in
commit 24426654ed3a ("bpf: net: Avoid sk_setsockopt() taking sk lock
when called from bpf").

Patch 1 switches all lock_sock()/release_sock()/lock_sock_fast() calls
in MPTCP sockopt handlers to use the BPF-aware wrappers, avoiding the
risk of sleeping in atomic context when lock_sock_fast() is used.

Patch 2 switches ns_capable() to sockopt_ns_capable() in the
congestion control setsockopt path, properly handling the case where
BPF programs invoke setsockopt from softirq context.

Both patches are fixes that should have been part of the original
BPF sockopt series.

Changelog:
v4:
  - As sashiko pointed out, when processing a BPF setsockopt request
    the msk is already locked, but we still need lock_sock() to
    protect the ssk: sockopt_lock_sock(ssk) would return without
    acquiring the lock.
  - In 'mptcp_setsockopt_sol_tcp_congestion', the 'load' argument of
    'tcp_set_congestion_control' is changed from 'true' to
    '!has_current_bpf_ctx()', as TCP does. This determines whether
    tcp_ca_find() or tcp_ca_find_autoload() is called. I agree we
    should stay consistent with the TCP implementation.

v3:
  - Remove the special symbols in v2.
  - Use sockopt_ns_capable to replace ns_capable.

v2:
  Link: https://patchwork.kernel.org/project/mptcp/patch/20260422091927.77770-3-gang.yan@linux.dev/

Signed-off-by: Gang Yan <yangang@kylinos.cn>
---
Link to v3: https://lore.kernel.org/r/20260506-sockopt_lock-v3-0-06bd417c6d63@kylinos.cn

---
Gang Yan (2):
      mptcp: use sockopt_lock(release)_sock in sockopt
      mptcp: use sockopt_ns_capable() in setsockopt congestion control

 net/mptcp/sockopt.c | 101 ++++++++++++++++++++++++++--------------------------
 1 file changed, 50 insertions(+), 51 deletions(-)
---
base-commit: aa15c271d79edde595fb6f4eedb52fbc16325a83
change-id: 20260506-sockopt_lock-c46837d6d9d7

Best regards,
-- 
Gang Yan <yangang@kylinos.cn>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt
  2026-05-09  6:44 [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock Gang Yan
@ 2026-05-09  6:44 ` Gang Yan
  2026-05-15  1:27   ` gang.yan
  2026-05-09  6:44 ` [PATCH mptcp-net v4 2/2] mptcp: use sockopt_ns_capable() in setsockopt congestion control Gang Yan
  2026-05-09  7:47 ` [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock MPTCP CI
  2 siblings, 1 reply; 8+ messages in thread
From: Gang Yan @ 2026-05-09  6:44 UTC (permalink / raw)
  To: MPTCP Linux; +Cc: Gang Yan

From: Gang Yan <yangang@kylinos.cn>

TCP and the core socket layer both use sockopt_lock_sock() and
sockopt_release_sock() in their setsockopt and getsockopt handlers.
These are BPF-aware wrappers that skip lock acquisition when invoked
from a BPF program, where the socket lock is already held.

Using lock_sock_fast() on subflows requires extra care: the fast path
holds the socket spinlock with BH disabled, creating an atomic context
where sleeping is not allowed.  Switching to lock_sock() avoids the
risk of accidentally introducing sleeping operations inside the
lock_sock_fast() critical section.

Fixes: 24426654ed3a ("bpf: net: Avoid sk_setsockopt() taking sk lock when called from bpf")
Signed-off-by: Gang Yan <yangang@kylinos.cn>
---
 net/mptcp/sockopt.c | 97 ++++++++++++++++++++++++++---------------------------
 1 file changed, 48 insertions(+), 49 deletions(-)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 1cf608e7357b..57ffde0d0b87 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -72,12 +72,12 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
 	struct mptcp_subflow_context *subflow;
 	struct sock *sk = (struct sock *)msk;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	sockopt_seq_inc(msk);
 
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		bool slow = lock_sock_fast(ssk);
+		lock_sock(ssk);
 
 		switch (optname) {
 		case SO_DEBUG:
@@ -114,10 +114,10 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
 		}
 
 		subflow->setsockopt_seq = msk->setsockopt_seq;
-		unlock_sock_fast(ssk, slow);
+		release_sock(ssk);
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 }
 
 static int mptcp_sol_socket_intval(struct mptcp_sock *msk, int optname, int val)
@@ -156,7 +156,7 @@ static int mptcp_setsockopt_sol_socket_tstamp(struct mptcp_sock *msk, int optnam
 	if (ret)
 		return ret;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 
@@ -165,7 +165,7 @@ static int mptcp_setsockopt_sol_socket_tstamp(struct mptcp_sock *msk, int optnam
 		release_sock(ssk);
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return 0;
 }
 
@@ -231,7 +231,7 @@ static int mptcp_setsockopt_sol_socket_timestamping(struct mptcp_sock *msk,
 	if (ret)
 		return ret;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -241,7 +241,7 @@ static int mptcp_setsockopt_sol_socket_timestamping(struct mptcp_sock *msk,
 		release_sock(ssk);
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 
 	return 0;
 }
@@ -266,11 +266,11 @@ static int mptcp_setsockopt_sol_socket_linger(struct mptcp_sock *msk, sockptr_t
 	if (ret)
 		return ret;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	sockopt_seq_inc(msk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		bool slow = lock_sock_fast(ssk);
+		lock_sock(ssk);
 
 		if (!ling.l_onoff) {
 			sock_reset_flag(ssk, SOCK_LINGER);
@@ -280,10 +280,10 @@ static int mptcp_setsockopt_sol_socket_linger(struct mptcp_sock *msk, sockptr_t
 		}
 
 		subflow->setsockopt_seq = msk->setsockopt_seq;
-		unlock_sock_fast(ssk, slow);
+		release_sock(ssk);
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return 0;
 }
 
@@ -299,10 +299,10 @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 	case SO_REUSEADDR:
 	case SO_BINDTODEVICE:
 	case SO_BINDTOIFINDEX:
-		lock_sock(sk);
+		sockopt_lock_sock(sk);
 		ssk = __mptcp_nmpc_sk(msk);
 		if (IS_ERR(ssk)) {
-			release_sock(sk);
+			sockopt_release_sock(sk);
 			return PTR_ERR(ssk);
 		}
 
@@ -317,7 +317,7 @@ static int mptcp_setsockopt_sol_socket(struct mptcp_sock *msk, int optname,
 			else if (optname == SO_BINDTOIFINDEX)
 				sk->sk_bound_dev_if = ssk->sk_bound_dev_if;
 		}
-		release_sock(sk);
+		sockopt_release_sock(sk);
 		return ret;
 	case SO_KEEPALIVE:
 	case SO_PRIORITY:
@@ -395,16 +395,16 @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname,
 	case IPV6_V6ONLY:
 	case IPV6_TRANSPARENT:
 	case IPV6_FREEBIND:
-		lock_sock(sk);
+		sockopt_lock_sock(sk);
 		ssk = __mptcp_nmpc_sk(msk);
 		if (IS_ERR(ssk)) {
-			release_sock(sk);
+			sockopt_release_sock(sk);
 			return PTR_ERR(ssk);
 		}
 
 		ret = tcp_setsockopt(ssk, SOL_IPV6, optname, optval, optlen);
 		if (ret != 0) {
-			release_sock(sk);
+			sockopt_release_sock(sk);
 			return ret;
 		}
 
@@ -424,7 +424,7 @@ static int mptcp_setsockopt_v6(struct mptcp_sock *msk, int optname,
 			break;
 		}
 
-		release_sock(sk);
+		sockopt_release_sock(sk);
 		break;
 	}
 
@@ -601,7 +601,7 @@ static int mptcp_setsockopt_sol_tcp_congestion(struct mptcp_sock *msk, sockptr_t
 	cap_net_admin = ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN);
 
 	ret = 0;
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	sockopt_seq_inc(msk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -618,7 +618,7 @@ static int mptcp_setsockopt_sol_tcp_congestion(struct mptcp_sock *msk, sockptr_t
 	if (ret == 0)
 		strscpy(msk->ca_name, name, sizeof(msk->ca_name));
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return ret;
 }
 
@@ -697,11 +697,11 @@ static int mptcp_setsockopt_sol_ip_set(struct mptcp_sock *msk, int optname,
 	if (err != 0)
 		return err;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 
 	ssk = __mptcp_nmpc_sk(msk);
 	if (IS_ERR(ssk)) {
-		release_sock(sk);
+		sockopt_release_sock(sk);
 		return PTR_ERR(ssk);
 	}
 
@@ -722,13 +722,13 @@ static int mptcp_setsockopt_sol_ip_set(struct mptcp_sock *msk, int optname,
 			   READ_ONCE(inet_sk(sk)->local_port_range));
 		break;
 	default:
-		release_sock(sk);
+		sockopt_release_sock(sk);
 		WARN_ON_ONCE(1);
 		return -EOPNOTSUPP;
 	}
 
 	sockopt_seq_inc(msk);
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return 0;
 }
 
@@ -744,18 +744,17 @@ static int mptcp_setsockopt_v4_set_tos(struct mptcp_sock *msk, int optname,
 	if (err != 0)
 		return err;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	sockopt_seq_inc(msk);
 	val = READ_ONCE(inet_sk(sk)->tos);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		bool slow;
 
-		slow = lock_sock_fast(ssk);
+		lock_sock(ssk);
 		__ip_sock_set_tos(ssk, val);
-		unlock_sock_fast(ssk, slow);
+		release_sock(ssk);
 	}
-	release_sock(sk);
+	sockopt_release_sock(sk);
 
 	return 0;
 }
@@ -784,7 +783,7 @@ static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 	int ret;
 
 	/* Limit to first subflow, before the connection establishment */
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	ssk = __mptcp_nmpc_sk(msk);
 	if (IS_ERR(ssk)) {
 		ret = PTR_ERR(ssk);
@@ -794,7 +793,7 @@ static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 	ret = tcp_setsockopt(ssk, level, optname, optval, optlen);
 
 unlock:
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return ret;
 }
 
@@ -846,7 +845,7 @@ static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
 	if (ret)
 		return ret;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	switch (optname) {
 	case TCP_INQ:
 		if (val < 0 || val > 1)
@@ -889,7 +888,7 @@ static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
 		ret = -ENOPROTOOPT;
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return ret;
 }
 
@@ -913,9 +912,9 @@ int mptcp_setsockopt(struct sock *sk, int level, int optname,
 	 * is in TCP fallback, when TCP socket options are passed through
 	 * to the one remaining subflow.
 	 */
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	ssk = __mptcp_tcp_fallback(msk);
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	if (ssk)
 		return tcp_setsockopt(ssk, level, optname, optval, optlen);
 
@@ -938,7 +937,7 @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 	struct sock *ssk;
 	int ret;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	ssk = msk->first;
 	if (ssk)
 		goto get;
@@ -953,7 +952,7 @@ static int mptcp_getsockopt_first_sf_only(struct mptcp_sock *msk, int level, int
 	ret = tcp_getsockopt(ssk, level, optname, optval, optlen);
 
 out:
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return ret;
 }
 
@@ -1124,7 +1123,7 @@ static int mptcp_getsockopt_tcpinfo(struct mptcp_sock *msk, char __user *optval,
 
 	infoptr = optval + sfd.size_subflow_data;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -1137,7 +1136,7 @@ static int mptcp_getsockopt_tcpinfo(struct mptcp_sock *msk, char __user *optval,
 			tcp_get_info(ssk, &info);
 
 			if (copy_to_user(infoptr, &info, sfd.size_user)) {
-				release_sock(sk);
+				sockopt_release_sock(sk);
 				return -EFAULT;
 			}
 
@@ -1147,7 +1146,7 @@ static int mptcp_getsockopt_tcpinfo(struct mptcp_sock *msk, char __user *optval,
 		}
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 
 	sfd.num_subflows = sfcount;
 
@@ -1216,7 +1215,7 @@ static int mptcp_getsockopt_subflow_addrs(struct mptcp_sock *msk, char __user *o
 
 	addrptr = optval + sfd.size_subflow_data;
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -1229,7 +1228,7 @@ static int mptcp_getsockopt_subflow_addrs(struct mptcp_sock *msk, char __user *o
 			mptcp_get_sub_addrs(ssk, &a);
 
 			if (copy_to_user(addrptr, &a, sfd.size_user)) {
-				release_sock(sk);
+				sockopt_release_sock(sk);
 				return -EFAULT;
 			}
 
@@ -1239,7 +1238,7 @@ static int mptcp_getsockopt_subflow_addrs(struct mptcp_sock *msk, char __user *o
 		}
 	}
 
-	release_sock(sk);
+	sockopt_release_sock(sk);
 
 	sfd.num_subflows = sfcount;
 
@@ -1325,7 +1324,7 @@ static int mptcp_getsockopt_full_info(struct mptcp_sock *msk, char __user *optva
 				     sizeof(struct mptcp_subflow_info));
 	tcpinfoptr = u64_to_user_ptr(mfi.tcp_info);
 
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 		struct mptcp_subflow_info sfinfo;
@@ -1355,7 +1354,7 @@ static int mptcp_getsockopt_full_info(struct mptcp_sock *msk, char __user *optva
 		tcpinfoptr += mfi.size_tcpinfo_user;
 		sfinfoptr += mfi.size_sfinfo_user;
 	}
-	release_sock(sk);
+	sockopt_release_sock(sk);
 
 	mfi.num_subflows = sfcount;
 	if (mptcp_put_full_info(&mfi, optval, copylen, optlen))
@@ -1364,7 +1363,7 @@ static int mptcp_getsockopt_full_info(struct mptcp_sock *msk, char __user *optva
 	return 0;
 
 fail_release:
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	return -EFAULT;
 }
 
@@ -1519,9 +1518,9 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
 	 * is in TCP fallback, when socket options are passed through
 	 * to the one remaining subflow.
 	 */
-	lock_sock(sk);
+	sockopt_lock_sock(sk);
 	ssk = __mptcp_tcp_fallback(msk);
-	release_sock(sk);
+	sockopt_release_sock(sk);
 	if (ssk)
 		return tcp_getsockopt(ssk, level, optname, optval, option);
 

-- 
2.43.0



* [PATCH mptcp-net v4 2/2] mptcp: use sockopt_ns_capable() in setsockopt congestion control
  2026-05-09  6:44 [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock Gang Yan
  2026-05-09  6:44 ` [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt Gang Yan
@ 2026-05-09  6:44 ` Gang Yan
  2026-05-09  7:47 ` [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock MPTCP CI
  2 siblings, 0 replies; 8+ messages in thread
From: Gang Yan @ 2026-05-09  6:44 UTC (permalink / raw)
  To: MPTCP Linux; +Cc: Gang Yan

From: Gang Yan <yangang@kylinos.cn>

When a BPF program calls bpf_setsockopt(), it may run in softirq
context where ns_capable() is not appropriate as there is no valid
credential context.  Use sockopt_ns_capable() instead, which skips
the capability check when invoked from a BPF program.

Additionally, the 'load' argument of tcp_set_congestion_control() is
changed from 'true' to '!has_current_bpf_ctx()', as TCP does, so that
tcp_ca_find() rather than tcp_ca_find_autoload() is used from BPF
context and no module autoload is attempted there.

Fixes: e42c7beee71d ("bpf: net: Consider has_current_bpf_ctx() when testing capable() in sk_setsockopt()")
Signed-off-by: Gang Yan <yangang@kylinos.cn>
---
 net/mptcp/sockopt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 57ffde0d0b87..8535beaaf6b4 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -598,7 +598,7 @@ static int mptcp_setsockopt_sol_tcp_congestion(struct mptcp_sock *msk, sockptr_t
 
 	name[ret] = 0;
 
-	cap_net_admin = ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN);
+	cap_net_admin = sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN);
 
 	ret = 0;
 	sockopt_lock_sock(sk);
@@ -608,7 +608,7 @@ static int mptcp_setsockopt_sol_tcp_congestion(struct mptcp_sock *msk, sockptr_t
 		int err;
 
 		lock_sock(ssk);
-		err = tcp_set_congestion_control(ssk, name, true, cap_net_admin);
+		err = tcp_set_congestion_control(ssk, name, !has_current_bpf_ctx(), cap_net_admin);
 		if (err < 0 && ret == 0)
 			ret = err;
 		subflow->setsockopt_seq = msk->setsockopt_seq;

-- 
2.43.0



* Re: [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock
  2026-05-09  6:44 [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock Gang Yan
  2026-05-09  6:44 ` [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt Gang Yan
  2026-05-09  6:44 ` [PATCH mptcp-net v4 2/2] mptcp: use sockopt_ns_capable() in setsockopt congestion control Gang Yan
@ 2026-05-09  7:47 ` MPTCP CI
  2 siblings, 0 replies; 8+ messages in thread
From: MPTCP CI @ 2026-05-09  7:47 UTC (permalink / raw)
  To: Gang Yan; +Cc: mptcp

Hi Gang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Success! ✅
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/25594727876

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/81272fb3577a
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1091935


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)


* Re: [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt
  2026-05-09  6:44 ` [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt Gang Yan
@ 2026-05-15  1:27   ` gang.yan
  2026-05-15  9:06     ` Paolo Abeni
  0 siblings, 1 reply; 8+ messages in thread
From: gang.yan @ 2026-05-15  1:27 UTC (permalink / raw)
  To: MPTCP Linux; +Cc: Gang Yan

May 9, 2026 at 2:44 PM, "Gang Yan" <gang.yan@linux.dev> wrote:


> 
> From: Gang Yan <yangang@kylinos.cn>
> 
> TCP and the core socket layer all use sockopt_lock_sock()
> sockopt_release_sock() in their setsockopt and getsockopt handlers. It
> is a BPF-aware wrapper that skips lock acquisition when invoked from a
> BPF program, where the socket lock is already held.
> 
> Using lock_sock_fast() on subflows requires extra care: the fast path
> holds the socket spinlock with BH disabled, creating an atomic context
> where sleeping is not allowed. Switching to lock_sock() avoids the
> risk of accidentally introducing sleeping operations inside the
> lock_sock_fast() critical section.
> 
> Fixes: 24426654ed3a ("bpf: net: Avoid sk_setsockopt() taking sk lock when called from bpf")
> Signed-off-by: Gang Yan <yangang@kylinos.cn>
> ---
>  net/mptcp/sockopt.c | 97 ++++++++++++++++++++++++++---------------------------
>  1 file changed, 48 insertions(+), 49 deletions(-)
> 
> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
> index 1cf608e7357b..57ffde0d0b87 100644
> --- a/net/mptcp/sockopt.c
> +++ b/net/mptcp/sockopt.c
> @@ -72,12 +72,12 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
>  struct mptcp_subflow_context *subflow;
>  struct sock *sk = (struct sock *)msk;
>  
> - lock_sock(sk);
> + sockopt_lock_sock(sk);
>  sockopt_seq_inc(msk);
>  
>  mptcp_for_each_subflow(msk, subflow) {
>  struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> - bool slow = lock_sock_fast(ssk);

Hi Maintainers,

Thanks to sashiko for the review. Let me address the two points raised.

First, I think this patch does not introduce issues for existing MPTCP code.
Currently there is no scenario where a BPF program invokes the MPTCP
setsockopt/getsockopt handler — BPF helpers (bpf_setsockopt) bypass the
protocol-specific handler via __bpf_setsockopt(), and the cgroup BPF sockopt
filter clears the BPF context before calling ops->setsockopt(). So
has_current_bpf_ctx() is always false when the MPTCP handler executes. The
commit's primary purpose, as stated in the commit message, is to replace
lock_sock_fast() with lock_sock() to avoid sleeping within the
lock_sock_fast() critical section.


That said, the observation about the subflow locking gap is a valid design
concern for future BPF support. As sashiko pointed out, when the MPTCP
setsockopt handler is
eventually called from BPF context, the MPTCP meta socket (msk) is already
locked externally, but the subflow sockets (ssk) are not. The internal
delegation via sk_setsockopt(ssk, ...) or tcp_setsockopt(ssk, ...) calls
sockopt_lock_sock(ssk) which skips the lock due to the per-thread
has_current_bpf_ctx() flag, leaving ssk unprotected.

However, solving this is not straightforward. In BPF context we don't have a
good way to call lock_sock() on the subflow: as sashiko noted in the first
point, lock_sock() unconditionally calls might_sleep(), which would trigger
'sleeping in atomic context' splats. So properly locking the subflow in
BPF context remains an open question that needs to be addressed when MPTCP
BPF setsockopt support is actually introduced.

WDYT?

Thanks,
Gang


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt
  2026-05-15  1:27   ` gang.yan
@ 2026-05-15  9:06     ` Paolo Abeni
  2026-05-15  9:39       ` gang.yan
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Abeni @ 2026-05-15  9:06 UTC (permalink / raw)
  To: gang.yan, MPTCP Linux; +Cc: Gang Yan

Hi,

On 5/15/26 3:27 AM, gang.yan@linux.dev wrote:
> May 9, 2026 at 2:44 PM, "Gang Yan" <gang.yan@linux.dev> wrote:
>> From: Gang Yan <yangang@kylinos.cn>
>>
>> TCP and the core socket layer all use sockopt_lock_sock()/
>> sockopt_release_sock() in their setsockopt and getsockopt handlers. These
>> are BPF-aware wrappers that skip lock acquisition when invoked from a
>> BPF program, where the socket lock is already held.
>>
>> Using lock_sock_fast() on subflows requires extra care: the fast path
>> holds the socket spinlock with BH disabled, creating an atomic context
>> where sleeping is not allowed. Switching to lock_sock() avoids the
>> risk of accidentally introducing sleeping operations inside the
>> lock_sock_fast() critical section.
>>
>> Fixes: 24426654ed3a ("bpf: net: Avoid sk_setsockopt() taking sk lock when called from bpf")
>> Signed-off-by: Gang Yan <yangang@kylinos.cn>
>> ---
>>  net/mptcp/sockopt.c | 97 ++++++++++++++++++++++++++---------------------------
>>  1 file changed, 48 insertions(+), 49 deletions(-)
>>
>> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
>> index 1cf608e7357b..57ffde0d0b87 100644
>> --- a/net/mptcp/sockopt.c
>> +++ b/net/mptcp/sockopt.c
>> @@ -72,12 +72,12 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
>>  struct mptcp_subflow_context *subflow;
>>  struct sock *sk = (struct sock *)msk;
>>  
>> - lock_sock(sk);
>> + sockopt_lock_sock(sk);
>>  sockopt_seq_inc(msk);
>>  
>>  mptcp_for_each_subflow(msk, subflow) {
>>  struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
>> - bool slow = lock_sock_fast(ssk);
> 
> Hi, Maintainers:
> 
> Thanks for sashiko's review. Let me address these two points.
> 
> First, I think this patch does not introduce issues for existing MPTCP code.
> Currently there is no scenario where a BPF program invokes the MPTCP
> setsockopt/getsockopt handler — BPF helpers (bpf_setsockopt) bypass the
> protocol-specific handler via __bpf_setsockopt(), and the cgroup BPF sockopt
> filter clears the BPF context before calling ops->setsockopt(). So
> has_current_bpf_ctx() is always false when the MPTCP handler executes. The
> commit's primary purpose, as stated in the commit message, is to replace
> lock_sock_fast() with lock_sock() to avoid sleeping within the
> lock_sock_fast() critical section.
> 
> 
> The subflow locking concern is valid for future BPF support.
> 
> That said, the observation about the subflow locking gap is a valid design
> concern. As sashiko pointed out, when the MPTCP setsockopt handler is
> eventually called from BPF context, the MPTCP meta socket (msk) is already
> locked externally, but the subflow sockets (ssk) are not. The internal
> delegation via sk_setsockopt(ssk, ...) or tcp_setsockopt(ssk, ...) calls
> sockopt_lock_sock(ssk) which skips the lock due to the per-thread
> has_current_bpf_ctx() flag, leaving ssk unprotected.
> 
> However, solving this is not straightforward. In BPF context we don't have a
> good way to call lock_sock() on the subflow — as sashiko noted in the first
> point, lock_sock() unconditionally calls might_sleep(), and will cause
> 'sleeping in atomic context' problems in BPF context.
> BPF context remains an open question that needs to be addressed when MPTCP
> BPF setsockopt support is actually introduced.

I'm sorry for the late feedback. I think such concern is actually quite
relevant. Note that replacing

	bool slow = lock_sock_fast(ssk);

with:

	lock_sock(ssk);

does not change much WRT the sleeping issue.

I *think* the only viable path in the short term is making the sockopt
requiring subflow-level lock fail when called by BPF.

A possible long term alternative could be delegating the
subflow-level changes to a workqueue, but that really looks invasive and
I would really like to avoid such an option.

/P



* Re: [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt
  2026-05-15  9:06     ` Paolo Abeni
@ 2026-05-15  9:39       ` gang.yan
  2026-05-15 10:50         ` Paolo Abeni
  0 siblings, 1 reply; 8+ messages in thread
From: gang.yan @ 2026-05-15  9:39 UTC (permalink / raw)
  To: Paolo Abeni, MPTCP Linux; +Cc: Gang Yan

May 15, 2026 at 5:06 PM, "Paolo Abeni" <pabeni@redhat.com> wrote:


> 
> Hi,
> 
> On 5/15/26 3:27 AM, gang.yan@linux.dev wrote:
> 
> > 
> > May 9, 2026 at 2:44 PM, "Gang Yan" <gang.yan@linux.dev> wrote:
> > 
> > > 
> > > From: Gang Yan <yangang@kylinos.cn>
> > > 
> > >  TCP and the core socket layer all use sockopt_lock_sock()/
> > >  sockopt_release_sock() in their setsockopt and getsockopt handlers. These
> > >  are BPF-aware wrappers that skip lock acquisition when invoked from a
> > >  BPF program, where the socket lock is already held.
> > > 
> > >  Using lock_sock_fast() on subflows requires extra care: the fast path
> > >  holds the socket spinlock with BH disabled, creating an atomic context
> > >  where sleeping is not allowed. Switching to lock_sock() avoids the
> > >  risk of accidentally introducing sleeping operations inside the
> > >  lock_sock_fast() critical section.
> > > 
> > >  Fixes: 24426654ed3a ("bpf: net: Avoid sk_setsockopt() taking sk lock when called from bpf")
> > >  Signed-off-by: Gang Yan <yangang@kylinos.cn>
> > >  ---
> > >  net/mptcp/sockopt.c | 97 ++++++++++++++++++++++++++---------------------------
> > >  1 file changed, 48 insertions(+), 49 deletions(-)
> > > 
> > >  diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
> > >  index 1cf608e7357b..57ffde0d0b87 100644
> > >  --- a/net/mptcp/sockopt.c
> > >  +++ b/net/mptcp/sockopt.c
> > >  @@ -72,12 +72,12 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
> > >  struct mptcp_subflow_context *subflow;
> > >  struct sock *sk = (struct sock *)msk;
> > >  
> > >  - lock_sock(sk);
> > >  + sockopt_lock_sock(sk);
> > >  sockopt_seq_inc(msk);
> > >  
> > >  mptcp_for_each_subflow(msk, subflow) {
> > >  struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
> > >  - bool slow = lock_sock_fast(ssk);
> > > 
> >  
> >  Hi, Maintainers:
> >  
> >  Thanks for the review of sashiko. Let me address these two points.
> >  
> >  First, I think this patch does not introduce issues for existing MPTCP code.
> >  Currently there is no scenario where a BPF program invokes the MPTCP
> >  setsockopt/getsockopt handler — BPF helpers (bpf_setsockopt) bypass the
> >  protocol-specific handler via __bpf_setsockopt(), and the cgroup BPF sockopt
> >  filter clears the BPF context before calling ops->setsockopt(). So
> >  has_current_bpf_ctx() is always false when the MPTCP handler executes. The
> >  commit's primary purpose, as stated in the commit message, is to replace
> >  lock_sock_fast() with lock_sock() to avoid sleeping within the
> >  lock_sock_fast() critical section.
> >  
> >  
> >  The subflow locking concern is valid for future BPF support.
> >  
> >  That said, the observation about the subflow locking gap is a valid design
> >  concern. As sashiko pointed out, when the MPTCP setsockopt handler is
> >  eventually called from BPF context, the MPTCP meta socket (msk) is already
> >  locked externally, but the subflow sockets (ssk) are not. The internal
> >  delegation via sk_setsockopt(ssk, ...) or tcp_setsockopt(ssk, ...) calls
> >  sockopt_lock_sock(ssk) which skips the lock due to the per-thread
> >  has_current_bpf_ctx() flag, leaving ssk unprotected.
> >  
> >  However, solving this is not straightforward. In BPF context we don't have a
> >  good way to call lock_sock() on the subflow — as sashiko noted in first point,
> >  lock_sock() unconditionally calls might_sleep(), and will cause 'sleeping in
> >  atomic context' problems in BPF context. So properly locking the subflow in
> >  BPF context remains an open question that needs to be addressed when MPTCP
> >  BPF setsockopt support is actually introduced.
> > 
Hi Paolo,

Thanks for your reply.

> I'm sorry for the late feedback. I think such concern is actually quite
> relevant. Note that replacing
> 
>  bool slow = lock_sock_fast(ssk);
> 
> with:
> 
>  lock_sock(ssk);
> 
> does not change much WRT the sleeping issue.

I'm not questioning your point, but just to confirm — when you mentioned the
"sleeping issue", are you referring to the BPF context?

In fact, this patch addresses an issue that occurs in non-BPF scenarios, as
sashiko said in [1]:

https://sashiko.dev/#/patchset/20260420093343.16443-1-gang.yan@linux.dev

> 
> I *think* the only viable path in the short term is making the sockopt
> requiring subflow-level lock fail when called by BPF.
>
> A possible long term alternative could be delegating the
> subflow-level changes to a workqueue, but that really looks invasive and
> I would really like to avoid such an option.

I agree that the short-term approach sounds reasonable for now. We could
either implement the sockopt failure for BPF calls that require subflow-level
locking, or simply add a comment to mark this limitation clearly.

As far as I can see, MPTCP currently doesn’t have any real-world use cases
for setting sockopt via BPF anyway. The workqueue approach also seems too complex
and invasive for the current stage, so I’d prefer to avoid it for now.

WDYT?

Cheers,
Gang

> 
> /P
>


* Re: [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt
  2026-05-15  9:39       ` gang.yan
@ 2026-05-15 10:50         ` Paolo Abeni
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Abeni @ 2026-05-15 10:50 UTC (permalink / raw)
  To: gang.yan, MPTCP Linux; +Cc: Gang Yan

On 5/15/26 11:39 AM, gang.yan@linux.dev wrote:
> May 15, 2026 at 5:06 PM, "Paolo Abeni" <pabeni@redhat.com> wrote:
>> On 5/15/26 3:27 AM, gang.yan@linux.dev wrote:
>>>  The subflow locking concern is valid for future BPF support.
>>>  
>>>  That said, the observation about the subflow locking gap is a valid design
>>>  concern. As sashiko pointed out, when the MPTCP setsockopt handler is
>>>  eventually called from BPF context, the MPTCP meta socket (msk) is already
>>>  locked externally, but the subflow sockets (ssk) are not. The internal
>>>  delegation via sk_setsockopt(ssk, ...) or tcp_setsockopt(ssk, ...) calls
>>>  sockopt_lock_sock(ssk) which skips the lock due to the per-thread
>>>  has_current_bpf_ctx() flag, leaving ssk unprotected.
>>>  
>>>  However, solving this is not straightforward. In BPF context we don't have a
>>>  good way to call lock_sock() on the subflow — as sashiko noted in first point,
>>>  lock_sock() unconditionally calls might_sleep(), and will cause 'sleeping in
>>>  atomic context' problems in BPF context. So properly locking the subflow in
>>>  BPF context remains an open question that needs to be addressed when MPTCP
>>>  BPF setsockopt support is actually introduced.
>>>
> Hi Paolo,
> 
> Thanks for your reply.
> 
>> I'm sorry for the late feedback. I think such concern is actually quite
>> relevant. Note that replacing
>>
>>  bool slow = lock_sock_fast(ssk);
>>
>> with:
>>
>>  lock_sock(ssk);
>>
>> does not change much WRT the sleeping issue.
> 
> I'm not questioning your point, but just to confirm — when you mentioned the
> "sleeping issue", are you referring to the BPF context?

Yes.

> In fact, this patch addresses an issue that occurs in non-BPF scenarios like
> sashiko said in [1]:
> 
> https://sashiko.dev/#/patchset/20260420093343.16443-1-gang.yan@linux.dev

I think that bit/chunk should in a separate patch targeting net.

>> I *think* the only viable path in the short term is making the sockopt
>> requiring subflow-level lock fail when called by BPF.
>>
>> A possible long term alternative could be delegating the
>> subflow-level changes to a workqueue, but that really looks invasive and
>> I would really like to avoid such an option.
> 
> I agree that the short-term approach sounds reasonable for now. We could
> either implement the sockopt failure for BPF calls that require subflow-level
> locking, or simply add a comment to mark this limitation clearly.

I suggest avoiding the comment, because applications/BPF programs will trip
on this. Returning an error will make it clear that it is not working and
will prevent racy accesses.

> As far as I can see, MPTCP currently doesn’t have any real-world use cases
> for setting sockopt via BPF anyway. The workqueue approach also seems too complex
> and invasive for the current stage, so I’d prefer to avoid it for now.
> 
> WDYT?

I'm very fine with keeping this as simple as possible.

/P



end of thread, other threads:[~2026-05-15 10:51 UTC | newest]

Thread overview: 8+ messages
2026-05-09  6:44 [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock Gang Yan
2026-05-09  6:44 ` [PATCH mptcp-net v4 1/2] mptcp: use sockopt_lock(release)_sock in sockopt Gang Yan
2026-05-15  1:27   ` gang.yan
2026-05-15  9:06     ` Paolo Abeni
2026-05-15  9:39       ` gang.yan
2026-05-15 10:50         ` Paolo Abeni
2026-05-09  6:44 ` [PATCH mptcp-net v4 2/2] mptcp: use sockopt_ns_capable() in setsockopt congestion control Gang Yan
2026-05-09  7:47 ` [PATCH mptcp-net v4 0/2] mptcp: convert to sockopt_lock_sock MPTCP CI
