netdev.vger.kernel.org archive mirror
* [PATCH net-next v4 0/3] net: Avoid ehash lookup races
@ 2025-09-20 10:59 xuanqiang.luo
  2025-09-20 10:59 ` [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: xuanqiang.luo @ 2025-09-20 10:59 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

After R/W locks were replaced with RCU in commit 3ab5aee7fe84 ("net: Convert
TCP & DCCP hash tables to use RCU / hlist_nulls"), a race window has existed
during the switch from reqsk to sk and from sk to tw.

Now that both the timewait sock (tw) and the full sock (sk) reside on the
same ehash chain, it is appropriate to introduce hlist_nulls replace
operations to eliminate the race conditions caused by this window.
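
For reference, patch 2 turns the current del + add pair in
inet_ehash_insert() into a single replace under the ehash bucket lock,
roughly (simplified; see the patch for the real context):

	spin_lock(lock);
	/* was: sk_nulls_del_node_init_rcu(osk) + __sk_nulls_add_node_rcu(sk, list) */
	ret = sk_nulls_replace_node_init_rcu(osk, sk);
	spin_unlock(lock);

Patch 3 does the same for the sk -> tw switch in
inet_twsk_hashdance_schedule() via hlist_nulls_replace_init_rcu().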

Before this series, I sent an earlier version that tried to avoid the issue
with a locking mechanism. That approach turned out to have problems, so the
current series switches to the "replace" method instead.
For details, refer to:
https://lore.kernel.org/netdev/20250903024406.2418362-1-xuanqiang.luo@linux.dev/

When I ran into this type of issue recently, I found that there had been
several historical discussions about it, so I'm adding them here as
background for anyone interested:
1. https://lore.kernel.org/lkml/20230118015941.1313-1-kerneljasonxing@gmail.com/
2. https://lore.kernel.org/netdev/20230606064306.9192-1-duanmuquan@baidu.com/

---

Changes:
  v4:
    * Patch 1
        * Use WRITE_ONCE() for ->next in __hlist_nulls_replace_rcu(), and explain
          why in the commit message.
        * Remove the node hash check in hlist_nulls_replace_init_rcu() to avoid
          redundancy. Also remove the return value, as it serves no purpose in
          this patch series.
    * Patch 3
        * Remove the check of hlist_nulls_replace_init_rcu() return value in
          inet_twsk_hashdance_schedule() as it is unnecessary.
          Thanks to Kuni for clarifying this.

  v3: https://lore.kernel.org/all/20250916103054.719584-1-xuanqiang.luo@linux.dev/
    * Add more background information on this type of issue to the cover letter.

  v2: https://lore.kernel.org/all/20250916064614.605075-1-xuanqiang.luo@linux.dev/
    * Patch 1
        * Use WRITE_ONCE() to initialize old->pprev.
    * Patch 2&3
        * Optimize sk hashed check. Thanks Kuni for pointing it out!

  v1: https://lore.kernel.org/all/20250915070308.111816-1-xuanqiang.luo@linux.dev/

Xuanqiang Luo (3):
  rculist: Add __hlist_nulls_replace_rcu() and
    hlist_nulls_replace_init_rcu()
  inet: Avoid ehash lookup race in inet_ehash_insert()
  inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()

 include/linux/rculist_nulls.h | 52 +++++++++++++++++++++++++++++++++++
 include/net/sock.h            | 23 ++++++++++++++++
 net/ipv4/inet_hashtables.c    |  4 ++-
 net/ipv4/inet_timewait_sock.c | 13 +++------
 4 files changed, 82 insertions(+), 10 deletions(-)

-- 
2.25.1


* [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-20 10:59 [PATCH net-next v4 0/3] net: Avoid ehash lookup races xuanqiang.luo
@ 2025-09-20 10:59 ` xuanqiang.luo
  2025-09-23  0:19   ` Kuniyuki Iwashima
  2025-09-20 10:59 ` [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
  2025-09-20 10:59 ` [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
  2 siblings, 1 reply; 12+ messages in thread
From: xuanqiang.luo @ 2025-09-20 10:59 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Add two functions to atomically replace RCU-protected hlist_nulls entries.

Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as done in
the commits below:
efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")
860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for hlist_nulls")

Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 include/linux/rculist_nulls.h | 52 +++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
index 89186c499dd4..d86331ce22c4 100644
--- a/include/linux/rculist_nulls.h
+++ b/include/linux/rculist_nulls.h
@@ -152,6 +152,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
 	n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
 }
 
+/**
+ * __hlist_nulls_replace_rcu - replace an old entry by a new one
+ * @old: the element to be replaced
+ * @new: the new element to insert
+ *
+ * Description:
+ * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
+ * permitting racing traversals.
+ *
+ * The caller must take whatever precautions are necessary (such as holding
+ * appropriate locks) to avoid racing with another list-mutation primitive, such
+ * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
+ * list.  However, it is perfectly legal to run concurrently with the _rcu
+ * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
+ */
+static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
+					     struct hlist_nulls_node *new)
+{
+	struct hlist_nulls_node *next = old->next;
+
+	WRITE_ONCE(new->next, next);
+	WRITE_ONCE(new->pprev, old->pprev);
+	rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new);
+	if (!is_a_nulls(next))
+		WRITE_ONCE(new->next->pprev, &new->next);
+}
+
+/**
+ * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
+ * initialize the old
+ * @old: the element to be replaced
+ * @new: the new element to insert
+ *
+ * Description:
+ * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
+ * permitting racing traversals, and reinitialize the old entry.
+ *
+ * Note: @old should be hashed.
+ *
+ * The caller must take whatever precautions are necessary (such as holding
+ * appropriate locks) to avoid racing with another list-mutation primitive, such
+ * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
+ * list. However, it is perfectly legal to run concurrently with the _rcu
+ * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
+ */
+static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
+						struct hlist_nulls_node *new)
+{
+	__hlist_nulls_replace_rcu(old, new);
+	WRITE_ONCE(old->pprev, NULL);
+}
+
 /**
  * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
  * @tpos:	the type * to use as a loop cursor.
-- 
2.25.1


* [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
  2025-09-20 10:59 [PATCH net-next v4 0/3] net: Avoid ehash lookup races xuanqiang.luo
  2025-09-20 10:59 ` [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
@ 2025-09-20 10:59 ` xuanqiang.luo
  2025-09-23  0:23   ` Kuniyuki Iwashima
  2025-09-20 10:59 ` [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
  2 siblings, 1 reply; 12+ messages in thread
From: xuanqiang.luo @ 2025-09-20 10:59 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Since ehash lookups are lockless, if one CPU performs a lookup while
another concurrently deletes and inserts (removing the reqsk and inserting
the sk), the lookup may fail to find the socket and an RST may be sent.

The call trace map is drawn as follows:
   CPU 0                           CPU 1
   -----                           -----
				inet_ehash_insert()
                                spin_lock()
                                sk_nulls_del_node_init_rcu(osk)
__inet_lookup_established()
	(lookup failed)
                                __sk_nulls_add_node_rcu(sk, list)
                                spin_unlock()

As both deletion and insertion operate on the same ehash chain, this patch
introduces two new sk_nulls_replace_* helper functions to implement atomic
replacement.

Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 include/net/sock.h         | 23 +++++++++++++++++++++++
 net/ipv4/inet_hashtables.c |  4 +++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 0fd465935334..e709376eaf0a 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -854,6 +854,29 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
 	return rc;
 }
 
+static inline bool __sk_nulls_replace_node_init_rcu(struct sock *old,
+						    struct sock *new)
+{
+	if (sk_hashed(old)) {
+		hlist_nulls_replace_init_rcu(&old->sk_nulls_node,
+					     &new->sk_nulls_node);
+		return true;
+	}
+	return false;
+}
+
+static inline bool sk_nulls_replace_node_init_rcu(struct sock *old,
+						  struct sock *new)
+{
+	bool rc = __sk_nulls_replace_node_init_rcu(old, new);
+
+	if (rc) {
+		WARN_ON(refcount_read(&old->sk_refcnt) == 1);
+		__sock_put(old);
+	}
+	return rc;
+}
+
 static inline void __sk_add_node(struct sock *sk, struct hlist_head *list)
 {
 	hlist_add_head(&sk->sk_node, list);
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index ef4ccfd46ff6..83c9ec625419 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -685,7 +685,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
-		ret = sk_nulls_del_node_init_rcu(osk);
+		ret = sk_nulls_replace_node_init_rcu(osk, sk);
+		goto unlock;
 	} else if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
@@ -695,6 +696,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
+unlock:
 	spin_unlock(lock);
 
 	return ret;
-- 
2.25.1


* [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-20 10:59 [PATCH net-next v4 0/3] net: Avoid ehash lookup races xuanqiang.luo
  2025-09-20 10:59 ` [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
  2025-09-20 10:59 ` [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
@ 2025-09-20 10:59 ` xuanqiang.luo
  2025-09-23  0:45   ` Kuniyuki Iwashima
  2 siblings, 1 reply; 12+ messages in thread
From: xuanqiang.luo @ 2025-09-20 10:59 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Since ehash lookups are lockless, if another CPU is concurrently converting
sk to tw, fetching the newly inserted tw while tw->tw_refcnt == 0 causes a
lookup failure.

The call trace map is drawn as follows:
   CPU 0                                CPU 1
   -----                                -----
				     inet_twsk_hashdance_schedule()
				     spin_lock()
				     inet_twsk_add_node_rcu(tw, ...)
__inet_lookup_established()
(find tw, failure due to tw_refcnt = 0)
				     __sk_nulls_del_node_init_rcu(sk)
				     refcount_set(&tw->tw_refcnt, 3)
				     spin_unlock()

By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
setting tw_refcnt, we ensure that tw is either fully initialized or not
visible to other CPUs, eliminating the race.

It's worth noting that the replacement happens under lock_sock(), so there is
no need to check whether sk is hashed. Thanks to Kuniyuki Iwashima!

Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 net/ipv4/inet_timewait_sock.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 5b5426b8ee92..bb98888584a8 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 	spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
 	struct inet_bind_hashbucket *bhead, *bhead2;
 
-	/* Step 1: Put TW into bind hash. Original socket stays there too.
+	/* Put TW into bind hash. Original socket stays there too.
 	   Note, that any socket with inet->num != 0 MUST be bound in
 	   binding cache, even if it is closed.
 	 */
@@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 
 	spin_lock(lock);
 
-	/* Step 2: Hash TW into tcp ehash chain */
-	inet_twsk_add_node_rcu(tw, &ehead->chain);
-
-	/* Step 3: Remove SK from hash chain */
-	if (__sk_nulls_del_node_init_rcu(sk))
-		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
-
-
 	/* Ensure above writes are committed into memory before updating the
 	 * refcount.
 	 * Provides ordering vs later refcount_inc().
@@ -162,6 +154,9 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 	 */
 	refcount_set(&tw->tw_refcnt, 3);
 
+	hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
+	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+
 	inet_twsk_schedule(tw, timeo);
 
 	spin_unlock(lock);
-- 
2.25.1


* Re: [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-20 10:59 ` [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
@ 2025-09-23  0:19   ` Kuniyuki Iwashima
  2025-09-23  1:59     ` luoxuanqiang
  0 siblings, 1 reply; 12+ messages in thread
From: Kuniyuki Iwashima @ 2025-09-23  0:19 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>
> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>
> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as mentioned in
> the patch below:
> efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")
> 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for hlist_nulls")
>
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> ---
>  include/linux/rculist_nulls.h | 52 +++++++++++++++++++++++++++++++++++
>  1 file changed, 52 insertions(+)
>
> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> index 89186c499dd4..d86331ce22c4 100644
> --- a/include/linux/rculist_nulls.h
> +++ b/include/linux/rculist_nulls.h
> @@ -152,6 +152,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
>         n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
>  }
>
> +/**
> + * __hlist_nulls_replace_rcu - replace an old entry by a new one

nit: '__' is not needed as there is no non-'__' version.


> + * @old: the element to be replaced
> + * @new: the new element to insert
> + *
> + * Description:
> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> + * permitting racing traversals.
> + *
> + * The caller must take whatever precautions are necessary (such as holding
> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> + * list.  However, it is perfectly legal to run concurrently with the _rcu
> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> + */
> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> +                                            struct hlist_nulls_node *new)
> +{
> +       struct hlist_nulls_node *next = old->next;
> +
> +       WRITE_ONCE(new->next, next);
> +       WRITE_ONCE(new->pprev, old->pprev);
> +       rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new);
> +       if (!is_a_nulls(next))
> +               WRITE_ONCE(new->next->pprev, &new->next);
> +}
> +
> +/**
> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
> + * initialize the old
> + * @old: the element to be replaced
> + * @new: the new element to insert
> + *
> + * Description:
> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> + * permitting racing traversals, and reinitialize the old entry.
> + *
> + * Note: @old should be hashed.

nit: s/should/must/

> + *
> + * The caller must take whatever precautions are necessary (such as holding
> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> + * list. However, it is perfectly legal to run concurrently with the _rcu
> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> + */
> +static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
> +                                               struct hlist_nulls_node *new)
> +{
> +       __hlist_nulls_replace_rcu(old, new);
> +       WRITE_ONCE(old->pprev, NULL);
> +}
> +
>  /**
>   * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
>   * @tpos:      the type * to use as a loop cursor.
> --
> 2.25.1
>

* Re: [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
  2025-09-20 10:59 ` [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
@ 2025-09-23  0:23   ` Kuniyuki Iwashima
  2025-09-23  2:00     ` luoxuanqiang
  0 siblings, 1 reply; 12+ messages in thread
From: Kuniyuki Iwashima @ 2025-09-23  0:23 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>
> Since ehash lookups are lockless, if one CPU performs a lookup while
> another concurrently deletes and inserts (removing reqsk and inserting sk),
> the lookup may fail to find the socket, an RST may be sent.
>
> The call trace map is drawn as follows:
>    CPU 0                           CPU 1
>    -----                           -----
>                                 inet_ehash_insert()
>                                 spin_lock()
>                                 sk_nulls_del_node_init_rcu(osk)
> __inet_lookup_established()
>         (lookup failed)
>                                 __sk_nulls_add_node_rcu(sk, list)
>                                 spin_unlock()
>
> As both deletion and insertion operate on the same ehash chain, this patch
> introduces two new sk_nulls_replace_* helper functions to implement atomic
> replacement.
>
> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> ---
>  include/net/sock.h         | 23 +++++++++++++++++++++++
>  net/ipv4/inet_hashtables.c |  4 +++-
>  2 files changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 0fd465935334..e709376eaf0a 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -854,6 +854,29 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
>         return rc;
>  }
>
> +static inline bool __sk_nulls_replace_node_init_rcu(struct sock *old,
> +                                                   struct sock *new)

nit: This can be inlined into sk_nulls_replace_node_init_rcu() as
there is no other caller of __sk_nulls_replace_node_init_rcu().


> +{
> +       if (sk_hashed(old)) {
> +               hlist_nulls_replace_init_rcu(&old->sk_nulls_node,
> +                                            &new->sk_nulls_node);
> +               return true;
> +       }
> +       return false;
> +}
> +
> +static inline bool sk_nulls_replace_node_init_rcu(struct sock *old,
> +                                                 struct sock *new)
> +{
> +       bool rc = __sk_nulls_replace_node_init_rcu(old, new);
> +
> +       if (rc) {
> +               WARN_ON(refcount_read(&old->sk_refcnt) == 1);

nit: DEBUG_NET_WARN_ON_ONCE() would be better as
this is "paranoid" as commented in sk_del_node_init() etc.


> +               __sock_put(old);
> +       }
> +       return rc;
> +}
> +
>  static inline void __sk_add_node(struct sock *sk, struct hlist_head *list)
>  {
>         hlist_add_head(&sk->sk_node, list);
> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
> index ef4ccfd46ff6..83c9ec625419 100644
> --- a/net/ipv4/inet_hashtables.c
> +++ b/net/ipv4/inet_hashtables.c
> @@ -685,7 +685,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         spin_lock(lock);
>         if (osk) {
>                 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
> -               ret = sk_nulls_del_node_init_rcu(osk);
> +               ret = sk_nulls_replace_node_init_rcu(osk, sk);
> +               goto unlock;
>         } else if (found_dup_sk) {
>                 *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
>                 if (*found_dup_sk)
> @@ -695,6 +696,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>         if (ret)
>                 __sk_nulls_add_node_rcu(sk, list);
>
> +unlock:
>         spin_unlock(lock);
>
>         return ret;
> --
> 2.25.1
>

* Re: [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-20 10:59 ` [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
@ 2025-09-23  0:45   ` Kuniyuki Iwashima
  2025-09-23  2:07     ` luoxuanqiang
  0 siblings, 1 reply; 12+ messages in thread
From: Kuniyuki Iwashima @ 2025-09-23  0:45 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>
> Since ehash lookups are lockless, if another CPU is converting sk to tw
> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause
> lookup failure.
>
> The call trace map is drawn as follows:
>    CPU 0                                CPU 1
>    -----                                -----
>                                      inet_twsk_hashdance_schedule()
>                                      spin_lock()
>                                      inet_twsk_add_node_rcu(tw, ...)
> __inet_lookup_established()
> (find tw, failure due to tw_refcnt = 0)
>                                      __sk_nulls_del_node_init_rcu(sk)
>                                      refcount_set(&tw->tw_refcnt, 3)
>                                      spin_unlock()
>
> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
> setting tw_refcnt, we ensure that tw is either fully initialized or not
> visible to other CPUs, eliminating the race.
>
> It's worth noting that we replace under lock_sock(), so no need to check if sk
> is hashed. Thanks to Kuniyuki Iwashima!
>
> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>

This is not needed.  A pure review does not deserve Suggested-by.
That tag is used when someone suggests changing the core idea of
the patch.


> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> ---
>  net/ipv4/inet_timewait_sock.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> index 5b5426b8ee92..bb98888584a8 100644
> --- a/net/ipv4/inet_timewait_sock.c
> +++ b/net/ipv4/inet_timewait_sock.c
> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>         spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
>         struct inet_bind_hashbucket *bhead, *bhead2;
>
> -       /* Step 1: Put TW into bind hash. Original socket stays there too.
> +       /* Put TW into bind hash. Original socket stays there too.
>            Note, that any socket with inet->num != 0 MUST be bound in
>            binding cache, even if it is closed.
>          */
> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>
>         spin_lock(lock);
>
> -       /* Step 2: Hash TW into tcp ehash chain */
> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
> -
> -       /* Step 3: Remove SK from hash chain */
> -       if (__sk_nulls_del_node_init_rcu(sk))
> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> -
> -
>         /* Ensure above writes are committed into memory before updating the
>          * refcount.
>          * Provides ordering vs later refcount_inc().
> @@ -162,6 +154,9 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>          */
>         refcount_set(&tw->tw_refcnt, 3);

I discussed this series with Eric last week, and he pointed out
(thanks!) that we need to be careful about the memory barrier here.

refcount_set() is just WRITE_ONCE() and thus can be reordered,
and twsk could be published with 0 refcnt, resulting in another RST.


>
> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> +
>         inet_twsk_schedule(tw, timeo);
>
>         spin_unlock(lock);
> --
> 2.25.1
>

* Re: [PATCH net-next v4 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-23  0:19   ` Kuniyuki Iwashima
@ 2025-09-23  1:59     ` luoxuanqiang
  0 siblings, 0 replies; 12+ messages in thread
From: luoxuanqiang @ 2025-09-23  1:59 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo


On 2025/9/23 08:19, Kuniyuki Iwashima wrote:
> On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>
>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>>
>> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as mentioned in
>> the patch below:
>> efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")
>> 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for hlist_nulls")
>>
>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>> ---
>>   include/linux/rculist_nulls.h | 52 +++++++++++++++++++++++++++++++++++
>>   1 file changed, 52 insertions(+)
>>
>> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
>> index 89186c499dd4..d86331ce22c4 100644
>> --- a/include/linux/rculist_nulls.h
>> +++ b/include/linux/rculist_nulls.h
>> @@ -152,6 +152,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
>>          n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
>>   }
>>
>> +/**
>> + * __hlist_nulls_replace_rcu - replace an old entry by a new one
> nit: '__' is not needed as there is not no-'__' version.
>
>
>> + * @old: the element to be replaced
>> + * @new: the new element to insert
>> + *
>> + * Description:
>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
>> + * permitting racing traversals.
>> + *
>> + * The caller must take whatever precautions are necessary (such as holding
>> + * appropriate locks) to avoid racing with another list-mutation primitive, such
>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
>> + * list.  However, it is perfectly legal to run concurrently with the _rcu
>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
>> + */
>> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>> +                                            struct hlist_nulls_node *new)
>> +{
>> +       struct hlist_nulls_node *next = old->next;
>> +
>> +       WRITE_ONCE(new->next, next);
>> +       WRITE_ONCE(new->pprev, old->pprev);
>> +       rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new);
>> +       if (!is_a_nulls(next))
>> +               WRITE_ONCE(new->next->pprev, &new->next);
>> +}
>> +
>> +/**
>> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
>> + * initialize the old
>> + * @old: the element to be replaced
>> + * @new: the new element to insert
>> + *
>> + * Description:
>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
>> + * permitting racing traversals, and reinitialize the old entry.
>> + *
>> + * Note: @old should be hashed.
> nit: s/should/must/

Will fix them in the next version!

Thanks, Kuniyuki!

>> + *
>> + * The caller must take whatever precautions are necessary (such as holding
>> + * appropriate locks) to avoid racing with another list-mutation primitive, such
>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
>> + * list. However, it is perfectly legal to run concurrently with the _rcu
>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
>> + */
>> +static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
>> +                                               struct hlist_nulls_node *new)
>> +{
>> +       __hlist_nulls_replace_rcu(old, new);
>> +       WRITE_ONCE(old->pprev, NULL);
>> +}
>> +
>>   /**
>>    * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
>>    * @tpos:      the type * to use as a loop cursor.
>> --
>> 2.25.1
>>

* Re: [PATCH net-next v4 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
  2025-09-23  0:23   ` Kuniyuki Iwashima
@ 2025-09-23  2:00     ` luoxuanqiang
  0 siblings, 0 replies; 12+ messages in thread
From: luoxuanqiang @ 2025-09-23  2:00 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo


On 2025/9/23 08:23, Kuniyuki Iwashima wrote:
> On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>
>> Since ehash lookups are lockless, if one CPU performs a lookup while
>> another concurrently deletes and inserts (removing reqsk and inserting sk),
>> the lookup may fail to find the socket, an RST may be sent.
>>
>> The call trace map is drawn as follows:
>>     CPU 0                           CPU 1
>>     -----                           -----
>>                                  inet_ehash_insert()
>>                                  spin_lock()
>>                                  sk_nulls_del_node_init_rcu(osk)
>> __inet_lookup_established()
>>          (lookup failed)
>>                                  __sk_nulls_add_node_rcu(sk, list)
>>                                  spin_unlock()
>>
>> As both deletion and insertion operate on the same ehash chain, this patch
>> introduces two new sk_nulls_replace_* helper functions to implement atomic
>> replacement.
>>
>> Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>> ---
>>   include/net/sock.h         | 23 +++++++++++++++++++++++
>>   net/ipv4/inet_hashtables.c |  4 +++-
>>   2 files changed, 26 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/net/sock.h b/include/net/sock.h
>> index 0fd465935334..e709376eaf0a 100644
>> --- a/include/net/sock.h
>> +++ b/include/net/sock.h
>> @@ -854,6 +854,29 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
>>          return rc;
>>   }
>>
>> +static inline bool __sk_nulls_replace_node_init_rcu(struct sock *old,
>> +                                                   struct sock *new)
> nit: This can be inlined into sk_nulls_replace_node_init_rcu() as
> there is no caller of __sk_nulls_replace_node_init_rcu().
>
>
>> +{
>> +       if (sk_hashed(old)) {
>> +               hlist_nulls_replace_init_rcu(&old->sk_nulls_node,
>> +                                            &new->sk_nulls_node);
>> +               return true;
>> +       }
>> +       return false;
>> +}
>> +
>> +static inline bool sk_nulls_replace_node_init_rcu(struct sock *old,
>> +                                                 struct sock *new)
>> +{
>> +       bool rc = __sk_nulls_replace_node_init_rcu(old, new);
>> +
>> +       if (rc) {
>> +               WARN_ON(refcount_read(&old->sk_refcnt) == 1);
> nit: DEBUG_NET_WARN_ON_ONCE() would be better as
> this is "paranoid" as commented in sk_del_node_init() etc.

Will fix in the next version! Thanks, Kuniyuki!

>
>> +               __sock_put(old);
>> +       }
>> +       return rc;
>> +}
>> +
>>   static inline void __sk_add_node(struct sock *sk, struct hlist_head *list)
>>   {
>>          hlist_add_head(&sk->sk_node, list);
>> diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
>> index ef4ccfd46ff6..83c9ec625419 100644
>> --- a/net/ipv4/inet_hashtables.c
>> +++ b/net/ipv4/inet_hashtables.c
>> @@ -685,7 +685,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>>          spin_lock(lock);
>>          if (osk) {
>>                  WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
>> -               ret = sk_nulls_del_node_init_rcu(osk);
>> +               ret = sk_nulls_replace_node_init_rcu(osk, sk);
>> +               goto unlock;
>>          } else if (found_dup_sk) {
>>                  *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
>>                  if (*found_dup_sk)
>> @@ -695,6 +696,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
>>          if (ret)
>>                  __sk_nulls_add_node_rcu(sk, list);
>>
>> +unlock:
>>          spin_unlock(lock);
>>
>>          return ret;
>> --
>> 2.25.1
>>

* Re: [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-23  0:45   ` Kuniyuki Iwashima
@ 2025-09-23  2:07     ` luoxuanqiang
  2025-09-23  3:56       ` Kuniyuki Iwashima
  0 siblings, 1 reply; 12+ messages in thread
From: luoxuanqiang @ 2025-09-23  2:07 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo


On 2025/9/23 08:45, Kuniyuki Iwashima wrote:
> On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>
>> Since ehash lookups are lockless, if another CPU is converting sk to tw
>> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause
>> lookup failure.
>>
>> The call trace map is drawn as follows:
>>     CPU 0                                CPU 1
>>     -----                                -----
>>                                       inet_twsk_hashdance_schedule()
>>                                       spin_lock()
>>                                       inet_twsk_add_node_rcu(tw, ...)
>> __inet_lookup_established()
>> (find tw, failure due to tw_refcnt = 0)
>>                                       __sk_nulls_del_node_init_rcu(sk)
>>                                       refcount_set(&tw->tw_refcnt, 3)
>>                                       spin_unlock()
>>
>> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
>> setting tw_refcnt, we ensure that tw is either fully initialized or not
>> visible to other CPUs, eliminating the race.
>>
>> It's worth noting that we replace under lock_sock(), so no need to check if sk
>> is hashed. Thanks to Kuniyuki Iwashima!
>>
>> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
>> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
> This is not needed.  A pure review does not deserve Suggested-by.
> This is used when someone suggests changing the core idea of
> the patch.

Got it, but I still really appreciate your detailed
and patient review!

>
>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>> ---
>>   net/ipv4/inet_timewait_sock.c | 13 ++++---------
>>   1 file changed, 4 insertions(+), 9 deletions(-)
>>
>> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
>> index 5b5426b8ee92..bb98888584a8 100644
>> --- a/net/ipv4/inet_timewait_sock.c
>> +++ b/net/ipv4/inet_timewait_sock.c
>> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>          spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
>>          struct inet_bind_hashbucket *bhead, *bhead2;
>>
>> -       /* Step 1: Put TW into bind hash. Original socket stays there too.
>> +       /* Put TW into bind hash. Original socket stays there too.
>>             Note, that any socket with inet->num != 0 MUST be bound in
>>             binding cache, even if it is closed.
>>           */
>> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>
>>          spin_lock(lock);
>>
>> -       /* Step 2: Hash TW into tcp ehash chain */
>> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
>> -
>> -       /* Step 3: Remove SK from hash chain */
>> -       if (__sk_nulls_del_node_init_rcu(sk))
>> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>> -
>> -
>>          /* Ensure above writes are committed into memory before updating the
>>           * refcount.
>>           * Provides ordering vs later refcount_inc().
>> @@ -162,6 +154,9 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>           */
>>          refcount_set(&tw->tw_refcnt, 3);
> I discussed this series with Eric last week, and he pointed out
> (thanks!) that we need to be careful here about memory barrier.
>
> refcount_set() is just WRITE_ONCE() and thus can be reordered,
> and twsk could be published with 0 refcnt, resulting in another RST.
>
Thanks for passing along Eric's pointer!

Could you let me know if my modification here works?

That is, moving smp_wmb() to after the refcount update:

@@ -140,19 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,

         spin_lock(lock);

-       /* Step 2: Hash TW into tcp ehash chain */
-       inet_twsk_add_node_rcu(tw, &ehead->chain);
-
-       /* Step 3: Remove SK from hash chain */
-       if (__sk_nulls_del_node_init_rcu(sk))
-               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
-
-
-       /* Ensure above writes are committed into memory before updating the
-        * refcount.
-        * Provides ordering vs later refcount_inc().
-        */
-       smp_wmb();
         /* tw_refcnt is set to 3 because we have :
          * - one reference for bhash chain.
          * - one reference for ehash chain.
@@ -162,6 +149,14 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
          */
         refcount_set(&tw->tw_refcnt, 3);

+       /* Ensure tw_refcnt has been set before tw is published by
+        * necessary memory barrier.
+        */
+       smp_wmb();
+
+       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
+       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+
         inet_twsk_schedule(tw, timeo);

         spin_unlock(lock);

Thanks!
Xuanqiang

>> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
>> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>> +
>>          inet_twsk_schedule(tw, timeo);
>>
>>          spin_unlock(lock);
>> --
>> 2.25.1
>>

* Re: [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-23  2:07     ` luoxuanqiang
@ 2025-09-23  3:56       ` Kuniyuki Iwashima
  2025-09-23  4:11         ` luoxuanqiang
  0 siblings, 1 reply; 12+ messages in thread
From: Kuniyuki Iwashima @ 2025-09-23  3:56 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

On Mon, Sep 22, 2025 at 7:07 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>
>
> On 2025/9/23 08:45, Kuniyuki Iwashima wrote:
> > On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
> >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> >>
> >> Since ehash lookups are lockless, if another CPU is converting sk to tw
> >> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause
> >> lookup failure.
> >>
> >> The call trace map is drawn as follows:
> >>     CPU 0                                CPU 1
> >>     -----                                -----
> >>                                       inet_twsk_hashdance_schedule()
> >>                                       spin_lock()
> >>                                       inet_twsk_add_node_rcu(tw, ...)
> >> __inet_lookup_established()
> >> (find tw, failure due to tw_refcnt = 0)
> >>                                       __sk_nulls_del_node_init_rcu(sk)
> >>                                       refcount_set(&tw->tw_refcnt, 3)
> >>                                       spin_unlock()
> >>
> >> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
> >> setting tw_refcnt, we ensure that tw is either fully initialized or not
> >> visible to other CPUs, eliminating the race.
> >>
> >> It's worth noting that we replace under lock_sock(), so no need to check if sk
> >> is hashed. Thanks to Kuniyuki Iwashima!
> >>
> >> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
> >> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
> > This is not needed.  A pure review does not deserve Suggested-by.
> > This is used when someone suggests changing the core idea of
> > the patch.
>
> Got it, but still really appreciate your detailed
> and patient review!
>
> >
> >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> >> ---
> >>   net/ipv4/inet_timewait_sock.c | 13 ++++---------
> >>   1 file changed, 4 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
> >> index 5b5426b8ee92..bb98888584a8 100644
> >> --- a/net/ipv4/inet_timewait_sock.c
> >> +++ b/net/ipv4/inet_timewait_sock.c
> >> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
> >>          spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
> >>          struct inet_bind_hashbucket *bhead, *bhead2;
> >>
> >> -       /* Step 1: Put TW into bind hash. Original socket stays there too.
> >> +       /* Put TW into bind hash. Original socket stays there too.
> >>             Note, that any socket with inet->num != 0 MUST be bound in
> >>             binding cache, even if it is closed.
> >>           */
> >> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
> >>
> >>          spin_lock(lock);
> >>
> >> -       /* Step 2: Hash TW into tcp ehash chain */
> >> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
> >> -
> >> -       /* Step 3: Remove SK from hash chain */
> >> -       if (__sk_nulls_del_node_init_rcu(sk))
> >> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> >> -
> >> -
> >>          /* Ensure above writes are committed into memory before updating the
> >>           * refcount.
> >>           * Provides ordering vs later refcount_inc().
> >> @@ -162,6 +154,9 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
> >>           */
> >>          refcount_set(&tw->tw_refcnt, 3);
> > I discussed this series with Eric last week, and he pointed out
> > (thanks!) that we need to be careful here about memory barrier.
> >
> > refcount_set() is just WRITE_ONCE() and thus can be reordered,
> > and twsk could be published with 0 refcnt, resulting in another RST.
> >
> Thanks for Eric's pointer!
>
> Could you let me know if my modification here works?
>
> That is, moving smp_wmb() to after the refcount update:

I think this should be fine; one small comment below.

>
> @@ -140,19 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>
>          spin_lock(lock);
>
> -       /* Step 2: Hash TW into tcp ehash chain */
> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
> -
> -       /* Step 3: Remove SK from hash chain */
> -       if (__sk_nulls_del_node_init_rcu(sk))
> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> -
> -
> -       /* Ensure above writes are committed into memory before updating the
> -        * refcount.
> -        * Provides ordering vs later refcount_inc().
> -        */
> -       smp_wmb();
>          /* tw_refcnt is set to 3 because we have :
>           * - one reference for bhash chain.
>           * - one reference for ehash chain.
> @@ -162,6 +149,14 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>           */
>          refcount_set(&tw->tw_refcnt, 3);
>
> +       /* Ensure tw_refcnt has been set before tw is published by
> +        * necessary memory barrier.

This sounds like tw is published by the memory barrier;
perhaps remove the part after 'by'?  It's obvious that the comment
is for the smp_wmb() below.



> +        */
> +       smp_wmb();
> +
> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> +
>          inet_twsk_schedule(tw, timeo);
>
>          spin_unlock(lock);
>
> Thanks!
> Xuanqiang
>
> >> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
> >> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
> >> +
> >>          inet_twsk_schedule(tw, timeo);
> >>
> >>          spin_unlock(lock);
> >> --
> >> 2.25.1
> >>

* Re: [PATCH net-next v4 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-23  3:56       ` Kuniyuki Iwashima
@ 2025-09-23  4:11         ` luoxuanqiang
  0 siblings, 0 replies; 12+ messages in thread
From: luoxuanqiang @ 2025-09-23  4:11 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo


On 2025/9/23 11:56, Kuniyuki Iwashima wrote:
> On Mon, Sep 22, 2025 at 7:07 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>
>> On 2025/9/23 08:45, Kuniyuki Iwashima wrote:
>>> On Sat, Sep 20, 2025 at 4:00 AM <xuanqiang.luo@linux.dev> wrote:
>>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>>
>>>> Since ehash lookups are lockless, if another CPU is converting sk to tw
>>>> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause
>>>> lookup failure.
>>>>
>>>> The call trace map is drawn as follows:
>>>>      CPU 0                                CPU 1
>>>>      -----                                -----
>>>>                                        inet_twsk_hashdance_schedule()
>>>>                                        spin_lock()
>>>>                                        inet_twsk_add_node_rcu(tw, ...)
>>>> __inet_lookup_established()
>>>> (find tw, failure due to tw_refcnt = 0)
>>>>                                        __sk_nulls_del_node_init_rcu(sk)
>>>>                                        refcount_set(&tw->tw_refcnt, 3)
>>>>                                        spin_unlock()
>>>>
>>>> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
>>>> setting tw_refcnt, we ensure that tw is either fully initialized or not
>>>> visible to other CPUs, eliminating the race.
>>>>
>>>> It's worth noting that we replace under lock_sock(), so no need to check if sk
>>>> is hashed. Thanks to Kuniyuki Iwashima!
>>>>
>>>> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
>>>> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
>>> This is not needed.  A pure review does not deserve Suggested-by.
>>> This is used when someone suggests changing the core idea of
>>> the patch.
>> Got it, but still really appreciate your detailed
>> and patient review!
>>
>>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>> ---
>>>>    net/ipv4/inet_timewait_sock.c | 13 ++++---------
>>>>    1 file changed, 4 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
>>>> index 5b5426b8ee92..bb98888584a8 100644
>>>> --- a/net/ipv4/inet_timewait_sock.c
>>>> +++ b/net/ipv4/inet_timewait_sock.c
>>>> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>>>           spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
>>>>           struct inet_bind_hashbucket *bhead, *bhead2;
>>>>
>>>> -       /* Step 1: Put TW into bind hash. Original socket stays there too.
>>>> +       /* Put TW into bind hash. Original socket stays there too.
>>>>              Note, that any socket with inet->num != 0 MUST be bound in
>>>>              binding cache, even if it is closed.
>>>>            */
>>>> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>>>
>>>>           spin_lock(lock);
>>>>
>>>> -       /* Step 2: Hash TW into tcp ehash chain */
>>>> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
>>>> -
>>>> -       /* Step 3: Remove SK from hash chain */
>>>> -       if (__sk_nulls_del_node_init_rcu(sk))
>>>> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>>>> -
>>>> -
>>>>           /* Ensure above writes are committed into memory before updating the
>>>>            * refcount.
>>>>            * Provides ordering vs later refcount_inc().
>>>> @@ -162,6 +154,9 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>>>            */
>>>>           refcount_set(&tw->tw_refcnt, 3);
>>> I discussed this series with Eric last week, and he pointed out
>>> (thanks!) that we need to be careful here about memory barrier.
>>>
>>> refcount_set() is just WRITE_ONCE() and thus can be reordered,
>>> and twsk could be published with 0 refcnt, resulting in another RST.
>>>
>> Thanks for Eric's pointer!
>>
>> Could you let me know if my modification here works?
>>
>> That is, moving smp_wmb() to after the refcount update:
> I think this should be fine, small comment below
>
>> @@ -140,19 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>
>>           spin_lock(lock);
>>
>> -       /* Step 2: Hash TW into tcp ehash chain */
>> -       inet_twsk_add_node_rcu(tw, &ehead->chain);
>> -
>> -       /* Step 3: Remove SK from hash chain */
>> -       if (__sk_nulls_del_node_init_rcu(sk))
>> -               sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>> -
>> -
>> -       /* Ensure above writes are committed into memory before updating the
>> -        * refcount.
>> -        * Provides ordering vs later refcount_inc().
>> -        */
>> -       smp_wmb();
>>           /* tw_refcnt is set to 3 because we have :
>>            * - one reference for bhash chain.
>>            * - one reference for ehash chain.
>> @@ -162,6 +149,14 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
>>            */
>>           refcount_set(&tw->tw_refcnt, 3);
>>
>> +       /* Ensure tw_refcnt has been set before tw is published by
>> +        * necessary memory barrier.
> This sounds like tw is published by memory barrier,
> perhaps remove after 'by' ?  It's obvious that the comment
> is for smp_wmb() below.

I'm sorry for the confusion caused by my poor English.

I will express it more clearly in the next version,
like the following:

@@ -162,6 +149,15 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
          */
         refcount_set(&tw->tw_refcnt, 3);

+       /* Ensure tw_refcnt has been set before tw is published.
+        * smp_wmb() provides the necessary memory barrier to enforce this
+        * ordering.
+        */
+       smp_wmb();
+
+       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
+       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+
         inet_twsk_schedule(tw, timeo);
         spin_unlock(lock);

Thanks!
Xuanqiang

>
>
>> +        */
>> +       smp_wmb();
>> +
>> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
>> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>> +
>>           inet_twsk_schedule(tw, timeo);
>>
>>           spin_unlock(lock);
>>
>> Thanks!
>> Xuanqiang
>>
>>>> +       hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
>>>> +       sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
>>>> +
>>>>           inet_twsk_schedule(tw, timeo);
>>>>
>>>>           spin_unlock(lock);
>>>> --
>>>> 2.25.1
>>>>
