public inbox for netdev@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available
@ 2026-02-19  7:57 Fernando Fernandez Mancera
  2026-02-20  4:36 ` Allison Henderson
  2026-02-24  9:30 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Fernando Fernandez Mancera @ 2026-02-19  7:57 UTC (permalink / raw)
  To: netdev
  Cc: rds-devel, linux-rdma, gerd.rausch, horms, pabeni, kuba, edumazet,
	davem, allison.henderson, Fernando Fernandez Mancera,
	syzbot+5efae91f60932839f0a5

syzbot reported a recursive lock warning in rds_tcp_get_peer_sport() as
it calls inet6_getname() which acquires the socket lock that was already
held by __release_sock().

 kworker/u8:6/2985 is trying to acquire lock:
 ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
 ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533

 but task is already holding lock:
 ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
 ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_sock_set_cork+0x2c/0x2e0 net/ipv4/tcp.c:3694
   lock_sock_nested+0x48/0x100 net/core/sock.c:3780
   lock_sock include/net/sock.h:1709 [inline]
   inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533
   rds_tcp_get_peer_sport net/rds/tcp_listen.c:70 [inline]
   rds_tcp_conn_slots_available+0x288/0x470 net/rds/tcp_listen.c:149
   rds_recv_hs_exthdrs+0x60f/0x7c0 net/rds/recv.c:265
   rds_recv_incoming+0x9f6/0x12d0 net/rds/recv.c:389
   rds_tcp_data_recv+0x7f1/0xa40 net/rds/tcp_recv.c:243
   __tcp_read_sock+0x196/0x970 net/ipv4/tcp.c:1702
   rds_tcp_read_sock net/rds/tcp_recv.c:277 [inline]
   rds_tcp_data_ready+0x369/0x950 net/rds/tcp_recv.c:331
   tcp_rcv_established+0x19e9/0x2670 net/ipv4/tcp_input.c:6675
   tcp_v6_do_rcv+0x8eb/0x1ba0 net/ipv6/tcp_ipv6.c:1609
   sk_backlog_rcv include/net/sock.h:1185 [inline]
   __release_sock+0x1b8/0x3a0 net/core/sock.c:3213

Reading from the socket struct directly is safe in all possible paths.
For rds_tcp_accept_one(), the socket has just been accepted and is not
yet exposed to concurrent access. For rds_tcp_conn_slots_available(),
direct access avoids the recursive deadlock seen during backlog
processing, where the socket lock is already held by __release_sock().

However, rds_tcp_conn_slots_available() is also called from the normal
softirq path via tcp_data_ready(), where the lock is not held. This is
also safe because inet_dport is a stable 16-bit field. A READ_ONCE()
annotation is used because the value might be read locklessly by
concurrent accessors.

Note that it is also safe to call rds_tcp_conn_slots_available() from
rds_conn_shutdown() because the fan-out is disabled.

Fixes: 9d27a0fb122f ("net/rds: Trigger rds_send_ping() more than once")
Reported-by: syzbot+5efae91f60932839f0a5@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=5efae91f60932839f0a5
Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>
---
v2: clarified commit message and added a comment around the
rds_conn_shutdown() path
v3: used READ_ONCE() for lockless read and adjusted commit message
---
 net/rds/connection.c |  3 +++
 net/rds/tcp_listen.c | 28 +++++-----------------------
 2 files changed, 8 insertions(+), 23 deletions(-)

diff --git a/net/rds/connection.c b/net/rds/connection.c
index 185f73b01694..a542f94c0214 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -455,6 +455,9 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
 		rcu_read_unlock();
 	}
 
+	/* we do not hold the socket lock here but it is safe because
+	 * fan-out is disabled when calling conn_slots_available()
+	 */
 	if (conn->c_trans->conn_slots_available)
 		conn->c_trans->conn_slots_available(conn, false);
 }
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index 6fb5c928b8fd..dce7ac9d3197 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -59,30 +59,12 @@ void rds_tcp_keepalive(struct socket *sock)
 static int
 rds_tcp_get_peer_sport(struct socket *sock)
 {
-	union {
-		struct sockaddr_storage storage;
-		struct sockaddr addr;
-		struct sockaddr_in sin;
-		struct sockaddr_in6 sin6;
-	} saddr;
-	int sport;
-
-	if (kernel_getpeername(sock, &saddr.addr) >= 0) {
-		switch (saddr.addr.sa_family) {
-		case AF_INET:
-			sport = ntohs(saddr.sin.sin_port);
-			break;
-		case AF_INET6:
-			sport = ntohs(saddr.sin6.sin6_port);
-			break;
-		default:
-			sport = -1;
-		}
-	} else {
-		sport = -1;
-	}
+	struct sock *sk = sock->sk;
+
+	if (!sk)
+		return -1;
 
-	return sport;
+	return ntohs(READ_ONCE(inet_sk(sk)->inet_dport));
 }
 
 /* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available
  2026-02-19  7:57 [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available Fernando Fernandez Mancera
@ 2026-02-20  4:36 ` Allison Henderson
  2026-02-24  9:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Allison Henderson @ 2026-02-20  4:36 UTC (permalink / raw)
  To: fmancera@suse.de, netdev@vger.kernel.org
  Cc: linux-rdma@vger.kernel.org, davem@davemloft.net,
	rds-devel@oss.oracle.com,
	syzbot+5efae91f60932839f0a5@syzkaller.appspotmail.com,
	pabeni@redhat.com, horms@kernel.org, Gerd Rausch,
	edumazet@google.com, kuba@kernel.org

On Thu, 2026-02-19 at 08:57 +0100, Fernando Fernandez Mancera wrote:
> syzbot reported a recursive lock warning in rds_tcp_get_peer_sport() as
> it calls inet6_getname() which acquires the socket lock that was already
> held by __release_sock().
> 
>  kworker/u8:6/2985 is trying to acquire lock:
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533
> 
>  but task is already holding lock:
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: tcp_sock_set_cork+0x2c/0x2e0 net/ipv4/tcp.c:3694
>    lock_sock_nested+0x48/0x100 net/core/sock.c:3780
>    lock_sock include/net/sock.h:1709 [inline]
>    inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533
>    rds_tcp_get_peer_sport net/rds/tcp_listen.c:70 [inline]
>    rds_tcp_conn_slots_available+0x288/0x470 net/rds/tcp_listen.c:149
>    rds_recv_hs_exthdrs+0x60f/0x7c0 net/rds/recv.c:265
>    rds_recv_incoming+0x9f6/0x12d0 net/rds/recv.c:389
>    rds_tcp_data_recv+0x7f1/0xa40 net/rds/tcp_recv.c:243
>    __tcp_read_sock+0x196/0x970 net/ipv4/tcp.c:1702
>    rds_tcp_read_sock net/rds/tcp_recv.c:277 [inline]
>    rds_tcp_data_ready+0x369/0x950 net/rds/tcp_recv.c:331
>    tcp_rcv_established+0x19e9/0x2670 net/ipv4/tcp_input.c:6675
>    tcp_v6_do_rcv+0x8eb/0x1ba0 net/ipv6/tcp_ipv6.c:1609
>    sk_backlog_rcv include/net/sock.h:1185 [inline]
>    __release_sock+0x1b8/0x3a0 net/core/sock.c:3213
> 
> Reading from the socket struct directly is safe in all possible paths.
> For rds_tcp_accept_one(), the socket has just been accepted and is not
> yet exposed to concurrent access. For rds_tcp_conn_slots_available(),
> direct access avoids the recursive deadlock seen during backlog
> processing, where the socket lock is already held by __release_sock().
> 
> However, rds_tcp_conn_slots_available() is also called from the normal
> softirq path via tcp_data_ready(), where the lock is not held. This is
> also safe because inet_dport is a stable 16-bit field. A READ_ONCE()
> annotation is used because the value might be read locklessly by
> concurrent accessors.
> 
> Note that it is also safe to call rds_tcp_conn_slots_available() from
> rds_conn_shutdown() because the fan-out is disabled.
> 
> Fixes: 9d27a0fb122f ("net/rds: Trigger rds_send_ping() more than once")
> Reported-by: syzbot+5efae91f60932839f0a5@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=5efae91f60932839f0a5
> Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>

This looks good to me.  Thank you!
Reviewed-by: Allison Henderson <achender@kernel.org>

> ---
> v2: clarified commit message and added a comment around the
> rds_conn_shutdown() path
> v3: used READ_ONCE() for lockless read and adjusted commit message
> ---
>  net/rds/connection.c |  3 +++
>  net/rds/tcp_listen.c | 28 +++++-----------------------
>  2 files changed, 8 insertions(+), 23 deletions(-)
> 
> diff --git a/net/rds/connection.c b/net/rds/connection.c
> index 185f73b01694..a542f94c0214 100644
> --- a/net/rds/connection.c
> +++ b/net/rds/connection.c
> @@ -455,6 +455,9 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
>  		rcu_read_unlock();
>  	}
>  
> +	/* we do not hold the socket lock here but it is safe because
> +	 * fan-out is disabled when calling conn_slots_available()
> +	 */
>  	if (conn->c_trans->conn_slots_available)
>  		conn->c_trans->conn_slots_available(conn, false);
>  }
> diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
> index 6fb5c928b8fd..dce7ac9d3197 100644
> --- a/net/rds/tcp_listen.c
> +++ b/net/rds/tcp_listen.c
> @@ -59,30 +59,12 @@ void rds_tcp_keepalive(struct socket *sock)
>  static int
>  rds_tcp_get_peer_sport(struct socket *sock)
>  {
> -	union {
> -		struct sockaddr_storage storage;
> -		struct sockaddr addr;
> -		struct sockaddr_in sin;
> -		struct sockaddr_in6 sin6;
> -	} saddr;
> -	int sport;
> -
> -	if (kernel_getpeername(sock, &saddr.addr) >= 0) {
> -		switch (saddr.addr.sa_family) {
> -		case AF_INET:
> -			sport = ntohs(saddr.sin.sin_port);
> -			break;
> -		case AF_INET6:
> -			sport = ntohs(saddr.sin6.sin6_port);
> -			break;
> -		default:
> -			sport = -1;
> -		}
> -	} else {
> -		sport = -1;
> -	}
> +	struct sock *sk = sock->sk;
> +
> +	if (!sk)
> +		return -1;
>  
> -	return sport;
> +	return ntohs(READ_ONCE(inet_sk(sk)->inet_dport));
>  }
>  
>  /* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available
  2026-02-19  7:57 [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available Fernando Fernandez Mancera
  2026-02-20  4:36 ` Allison Henderson
@ 2026-02-24  9:30 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-02-24  9:30 UTC (permalink / raw)
  To: Fernando Fernandez Mancera
  Cc: netdev, rds-devel, linux-rdma, gerd.rausch, horms, pabeni, kuba,
	edumazet, davem, allison.henderson, syzbot+5efae91f60932839f0a5

Hello:

This patch was applied to netdev/net.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Thu, 19 Feb 2026 08:57:38 +0100 you wrote:
> syzbot reported a recursive lock warning in rds_tcp_get_peer_sport() as
> it calls inet6_getname() which acquires the socket lock that was already
> held by __release_sock().
> 
>  kworker/u8:6/2985 is trying to acquire lock:
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
>  ffff88807a07aa20 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: inet6_getname+0x15d/0x650 net/ipv6/af_inet6.c:533
> 
> [...]

Here is the summary with links:
  - [net,v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available
    https://git.kernel.org/netdev/net/c/021fd0f87004

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2026-02-24  9:30 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-19  7:57 [PATCH net v3] net/rds: fix recursive lock in rds_tcp_conn_slots_available Fernando Fernandez Mancera
2026-02-20  4:36 ` Allison Henderson
2026-02-24  9:30 ` patchwork-bot+netdevbpf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox