* [PATCH net v3 1/2] tcp: call sk_data_ready() after listener migration
2026-04-21 12:31 [PATCH net v3 0/2] tcp: fix listener wakeup after reuseport migration Zhenzhong Wu
@ 2026-04-21 12:31 ` Zhenzhong Wu
2026-04-21 12:31 ` [PATCH net v3 2/2] selftests/bpf: check epoll readiness during reuseport migration Zhenzhong Wu
1 sibling, 0 replies; 3+ messages in thread
From: Zhenzhong Wu @ 2026-04-21 12:31 UTC (permalink / raw)
To: netdev
Cc: edumazet, ncardwell, kuniyu, davem, dsahern, kuba, pabeni, horms,
shuah, tamird, linux-kernel, linux-kselftest, Zhenzhong Wu,
stable
When inet_csk_listen_stop() migrates an established child socket from
a closing listener to another socket in the same SO_REUSEPORT group,
the target listener gets a new accept-queue entry via
inet_csk_reqsk_queue_add(), but that path never notifies the target
listener's waiters. A nonblocking accept() still works because it
checks the queue directly, but poll()/epoll_wait() waiters and
blocking accept() callers can remain asleep indefinitely.
Fix this by calling READ_ONCE(nsk->sk_data_ready)(nsk) after a
successful migration in inet_csk_listen_stop().
However, after inet_csk_reqsk_queue_add() succeeds, the ref acquired
in reuseport_migrate_sock() is effectively transferred to
nreq->rsk_listener. Another CPU can then dequeue nreq via accept()
or listener shutdown, hit reqsk_put(), and drop that listener ref.
Since listeners are SOCK_RCU_FREE, wrap the post-queue_add()
dereferences of nsk in rcu_read_lock()/rcu_read_unlock(), which also
covers the existing sock_net(nsk) access in that path.
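The lifetime hazard the RCU section closes can be sketched as the following interleaving (a pseudocode timeline of the ordering, not the exact kernel source):

```
CPU 0 (inet_csk_listen_stop)           CPU 1 (accept() on nsk)
-----------------------------          ------------------------
inet_csk_reqsk_queue_add(nsk, ...)
  /* listener ref now owned by
   * nreq->rsk_listener */
                                       reqsk_queue_remove() -> nreq
                                       reqsk_put(nreq)
                                         /* drops the listener ref;
                                          * being SOCK_RCU_FREE, nsk
                                          * is freed after the grace
                                          * period */
READ_ONCE(nsk->sk_data_ready)(nsk)
  /* safe only while inside an
   * rcu_read_lock() section */
```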
The reqsk_timer_handler() path does not need the same changes for two
reasons: half-open requests become readable only after the final ACK,
where tcp_child_process() already wakes the listener; and once nreq is
visible via inet_ehash_insert(), the success path no longer touches
nsk directly.
Fixes: 54b92e841937 ("tcp: Migrate TCP_ESTABLISHED/TCP_SYN_RECV sockets in accept queues.")
Cc: stable@vger.kernel.org
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Zhenzhong Wu <jt26wzz@gmail.com>
---
net/ipv4/inet_connection_sock.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 4ac3ae1bc..928654c34 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -1479,16 +1479,19 @@ void inet_csk_listen_stop(struct sock *sk)
 			if (nreq) {
 				refcount_set(&nreq->rsk_refcnt, 1);
 
+				rcu_read_lock();
 				if (inet_csk_reqsk_queue_add(nsk, nreq, child)) {
 					__NET_INC_STATS(sock_net(nsk),
 							LINUX_MIB_TCPMIGRATEREQSUCCESS);
 					reqsk_migrate_reset(req);
+					READ_ONCE(nsk->sk_data_ready)(nsk);
 				} else {
 					__NET_INC_STATS(sock_net(nsk),
 							LINUX_MIB_TCPMIGRATEREQFAILURE);
 					reqsk_migrate_reset(nreq);
 					__reqsk_free(nreq);
 				}
+				rcu_read_unlock();
 
 				/* inet_csk_reqsk_queue_add() has already
 				 * called inet_child_forget() on failure case.
--
2.43.0
* [PATCH net v3 2/2] selftests/bpf: check epoll readiness during reuseport migration
2026-04-21 12:31 [PATCH net v3 0/2] tcp: fix listener wakeup after reuseport migration Zhenzhong Wu
2026-04-21 12:31 ` [PATCH net v3 1/2] tcp: call sk_data_ready() after listener migration Zhenzhong Wu
@ 2026-04-21 12:31 ` Zhenzhong Wu
1 sibling, 0 replies; 3+ messages in thread
From: Zhenzhong Wu @ 2026-04-21 12:31 UTC (permalink / raw)
To: netdev
Cc: edumazet, ncardwell, kuniyu, davem, dsahern, kuba, pabeni, horms,
shuah, tamird, linux-kernel, linux-kselftest, Zhenzhong Wu
Inside migrate_dance(), add epoll checks around shutdown() to
verify that the target listener is not ready before shutdown()
and becomes ready immediately after shutdown() triggers migration.
Cover the TCP_ESTABLISHED and TCP_SYN_RECV cases. Exclude
TCP_NEW_SYN_RECV, as its readiness depends on the handshake
completing later.
Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Zhenzhong Wu <jt26wzz@gmail.com>
---
.../bpf/prog_tests/migrate_reuseport.c | 46 ++++++++++++++++---
1 file changed, 40 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
index 653b0a20f..6180a79a7 100644
--- a/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
+++ b/tools/testing/selftests/bpf/prog_tests/migrate_reuseport.c
@@ -7,24 +7,29 @@
  *   3. call listen() for 1 server socket. (migration target)
  *   4. update a map to migrate all child sockets
  *        to the last server socket (migrate_map[cookie] = 4)
- *   5. call shutdown() for first 4 server sockets
+ *   5. for TCP_ESTABLISHED and TCP_SYN_RECV cases, verify via epoll
+ *        that the last server socket is not ready before migration.
+ *   6. call shutdown() for first 4 server sockets
  *        and migrate the requests in the accept queue
  *        to the last server socket.
- *   6. call listen() for the second server socket.
- *   7. call shutdown() for the last server
+ *   7. for TCP_ESTABLISHED and TCP_SYN_RECV cases, verify via epoll
+ *        that the last server socket is ready after migration.
+ *   8. call listen() for the second server socket.
+ *   9. call shutdown() for the last server
  *        and migrate the requests in the accept queue
  *        to the second server socket.
- *   8. call listen() for the last server.
- *   9. call shutdown() for the second server
+ *  10. call listen() for the last server.
+ *  11. call shutdown() for the second server
  *        and migrate the requests in the accept queue
  *        to the last server socket.
- *  10. call accept() for the last server socket.
+ *  12. call accept() for the last server socket.
  *
  * Author: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
  */
 
 #include <bpf/bpf.h>
 #include <bpf/libbpf.h>
+#include <sys/epoll.h>
 
 #include "test_progs.h"
 #include "test_migrate_reuseport.skel.h"
@@ -350,8 +355,28 @@ static int update_maps(struct migrate_reuseport_test_case *test_case,
 
 static int migrate_dance(struct migrate_reuseport_test_case *test_case)
 {
+	struct epoll_event ev = {
+		.events = EPOLLIN,
+	};
+	int epoll = -1, nfds;
 	int i, err;
 
+	if (test_case->state != BPF_TCP_NEW_SYN_RECV) {
+		epoll = epoll_create1(0);
+		if (!ASSERT_NEQ(epoll, -1, "epoll_create1"))
+			return -1;
+
+		ev.data.fd = test_case->servers[MIGRATED_TO];
+		if (!ASSERT_OK(epoll_ctl(epoll, EPOLL_CTL_ADD,
+					 test_case->servers[MIGRATED_TO], &ev),
+			       "epoll_ctl"))
+			goto close_epoll;
+
+		nfds = epoll_wait(epoll, &ev, 1, 0);
+		if (!ASSERT_EQ(nfds, 0, "epoll_wait 1"))
+			goto close_epoll;
+	}
+
 	/* Migrate TCP_ESTABLISHED and TCP_SYN_RECV requests
 	 * to the last listener based on eBPF.
 	 */
@@ -365,6 +390,15 @@ static int migrate_dance(struct migrate_reuseport_test_case *test_case)
 	if (test_case->state == BPF_TCP_NEW_SYN_RECV)
 		return 0;
 
+	nfds = epoll_wait(epoll, &ev, 1, 0);
+	if (!ASSERT_EQ(nfds, 1, "epoll_wait 2")) {
+close_epoll:
+		close(epoll);
+		return -1;
+	}
+
+	close(epoll);
+
 	/* Note that we use the second listener instead of the
 	 * first one here.
 	 *
--
2.43.0