netdev.vger.kernel.org archive mirror
* [PATCH net-next v7 0/3] net: Avoid ehash lookup races
@ 2025-09-26  7:40 xuanqiang.luo
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: xuanqiang.luo @ 2025-09-26  7:40 UTC (permalink / raw)
  To: edumazet, kuniyu, Paul E. McKenney
  Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

After replacing R/W locks with RCU in commit 3ab5aee7fe84 ("net: Convert
TCP & DCCP hash tables to use RCU / hlist_nulls"), a race window emerged
during the switch from reqsk/sk to sk/tw.

Now that both timewait sock (tw) and full sock (sk) reside on the same
ehash chain, it is appropriate to introduce hlist_nulls replace
operations, to eliminate the race conditions caused by this window.
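
For illustration, the intended usage pattern is roughly the following
(sketch only; the helper is added in patch 1 and wired up in patches 2
and 3, and old_sk/new_sk/lock are placeholder names):

	/* Writer side, serialized by the ehash bucket lock. Lockless RCU
	 * readers walking the chain see either old_sk or new_sk, never a
	 * window in which the entry is absent from the chain.
	 */
	spin_lock(lock);
	hlist_nulls_replace_init_rcu(&old_sk->sk_nulls_node,
				     &new_sk->sk_nulls_node);
	spin_unlock(lock);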

Before this series, I sent another version of the patch that attempted
to avoid the issue with a lock mechanism. That approach turned out to
have problems, so I've switched to the "replace" method in the current
patches. For details, refer to:
https://lore.kernel.org/netdev/20250903024406.2418362-1-xuanqiang.luo@linux.dev/

Before I ran into this issue myself, there had already been several
historical discussions about it, so I'm adding the background links here
for those interested:
1. https://lore.kernel.org/lkml/20230118015941.1313-1-kerneljasonxing@gmail.com/
2. https://lore.kernel.org/netdev/20230606064306.9192-1-duanmuquan@baidu.com/

---

Changes:
  v7:
    * Patch1
	* Fix the checkpatch complaints.
	* Introduce hlist_nulls_pprev_rcu() to replace
	  (*((struct hlist_nulls_node __rcu __force **)(node)->pprev)).
	* Use next->pprev instead of new->next->pprev.
    * Patch2
	* Remove else if in inet_ehash_insert(), use if instead.
    * Patch3
	* Fix legacy comment style issues in
	  inet_twsk_hashdance_schedule().

  v6: https://lore.kernel.org/all/20250925021628.886203-1-xuanqiang.luo@linux.dev/
    * Patch 1
        * Send and CC to the RCU maintainers.
    * Patch 3
        * Remove the unused function inet_twsk_add_node_rcu() and variable
          ehead to fix build warnings.

  v5: https://lore.kernel.org/all/20250924015034.587056-1-xuanqiang.luo@linux.dev/
    * Patch 1
        * Rename __hlist_nulls_replace_rcu() to hlist_nulls_replace_rcu()
          and update the description of hlist_nulls_replace_init_rcu().
    * Patch 2
        * Remove __sk_nulls_replace_node_init_rcu() and inline it into
          sk_nulls_replace_node_init_rcu().
        * Use DEBUG_NET_WARN_ON_ONCE() instead of WARN_ON().
    * Patch 3
        * Move smp_wmb() after setting the refcount.

  v4: https://lore.kernel.org/all/20250920105945.538042-1-xuanqiang.luo@linux.dev/
    * Patch 1
        * Use WRITE_ONCE() for ->next in __hlist_nulls_replace_rcu(), and
          add why in the commit message.
        * Remove the node hash check in hlist_nulls_replace_init_rcu() to
          avoid redundancy. Also remove the return value, as it serves no
          purpose in this patch series.
    * Patch 3
        * Remove the check of hlist_nulls_replace_init_rcu() return value
          in inet_twsk_hashdance_schedule() as it is unnecessary.
          Thanks to Kuni for clarifying this.

  v3: https://lore.kernel.org/all/20250916103054.719584-1-xuanqiang.luo@linux.dev/
    * Add more background information on this type of issue to the cover
      letter.

  v2: https://lore.kernel.org/all/20250916064614.605075-1-xuanqiang.luo@linux.dev/
    * Patch 1
        * Use WRITE_ONCE() to initialize old->pprev.
    * Patch 2&3
        * Optimize sk hashed check. Thanks Kuni for pointing it out!

  v1: https://lore.kernel.org/all/20250915070308.111816-1-xuanqiang.luo@linux.dev/

Xuanqiang Luo (3):
  rculist: Add hlist_nulls_replace_rcu() and
    hlist_nulls_replace_init_rcu()
  inet: Avoid ehash lookup race in inet_ehash_insert()
  inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()

 include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
 include/net/sock.h            | 14 +++++++++
 net/ipv4/inet_hashtables.c    |  8 +++--
 net/ipv4/inet_timewait_sock.c | 35 +++++++--------------
 4 files changed, 91 insertions(+), 25 deletions(-)

-- 
2.25.1



* [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-26  7:40 [PATCH net-next v7 0/3] net: Avoid ehash lookup races xuanqiang.luo
@ 2025-09-26  7:40 ` xuanqiang.luo
  2025-09-27 20:31   ` Kuniyuki Iwashima
                     ` (3 more replies)
  2025-09-26  7:40 ` [PATCH net-next v7 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
                   ` (2 subsequent siblings)
  3 siblings, 4 replies; 24+ messages in thread
From: xuanqiang.luo @ 2025-09-26  7:40 UTC (permalink / raw)
  To: edumazet, kuniyu, Paul E. McKenney
  Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Add two functions to atomically replace RCU-protected hlist_nulls entries.

Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
mentioned in the patch below:
commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
rculist_nulls")
commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
hlist_nulls")

Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
index 89186c499dd4..c26cb83ca071 100644
--- a/include/linux/rculist_nulls.h
+++ b/include/linux/rculist_nulls.h
@@ -52,6 +52,13 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
 #define hlist_nulls_next_rcu(node) \
 	(*((struct hlist_nulls_node __rcu __force **)&(node)->next))
 
+/**
+ * hlist_nulls_pprev_rcu - returns the dereferenced pprev of @node.
+ * @node: element of the list.
+ */
+#define hlist_nulls_pprev_rcu(node) \
+	(*((struct hlist_nulls_node __rcu __force **)(node)->pprev))
+
 /**
  * hlist_nulls_del_rcu - deletes entry from hash list without re-initialization
  * @n: the element to delete from the hash list.
@@ -152,6 +159,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
 	n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
 }
 
+/**
+ * hlist_nulls_replace_rcu - replace an old entry by a new one
+ * @old: the element to be replaced
+ * @new: the new element to insert
+ *
+ * Description:
+ * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
+ * permitting racing traversals.
+ *
+ * The caller must take whatever precautions are necessary (such as holding
+ * appropriate locks) to avoid racing with another list-mutation primitive, such
+ * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
+ * list.  However, it is perfectly legal to run concurrently with the _rcu
+ * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
+ */
+static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
+					   struct hlist_nulls_node *new)
+{
+	struct hlist_nulls_node *next = old->next;
+
+	WRITE_ONCE(new->next, next);
+	WRITE_ONCE(new->pprev, old->pprev);
+	rcu_assign_pointer(hlist_nulls_pprev_rcu(new), new);
+	if (!is_a_nulls(next))
+		WRITE_ONCE(next->pprev, &new->next);
+}
+
+/**
+ * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
+ * initialize the old
+ * @old: the element to be replaced
+ * @new: the new element to insert
+ *
+ * Description:
+ * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
+ * permitting racing traversals, and reinitialize the old entry.
+ *
+ * Note: @old must be hashed.
+ *
+ * The caller must take whatever precautions are necessary (such as holding
+ * appropriate locks) to avoid racing with another list-mutation primitive, such
+ * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
+ * list. However, it is perfectly legal to run concurrently with the _rcu
+ * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
+ */
+static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
+						struct hlist_nulls_node *new)
+{
+	hlist_nulls_replace_rcu(old, new);
+	WRITE_ONCE(old->pprev, NULL);
+}
+
 /**
  * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
  * @tpos:	the type * to use as a loop cursor.
-- 
2.25.1



* [PATCH net-next v7 2/3] inet: Avoid ehash lookup race in inet_ehash_insert()
  2025-09-26  7:40 [PATCH net-next v7 0/3] net: Avoid ehash lookup races xuanqiang.luo
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
@ 2025-09-26  7:40 ` xuanqiang.luo
  2025-09-26  7:40 ` [PATCH net-next v7 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
  2025-09-27  2:56 ` [PATCH net-next v7 0/3] net: Avoid ehash lookup races Jiayuan Chen
  3 siblings, 0 replies; 24+ messages in thread
From: xuanqiang.luo @ 2025-09-26  7:40 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Since ehash lookups are lockless, if one CPU performs a lookup while
another concurrently deletes and inserts (removing reqsk and inserting sk),
the lookup may fail to find the socket, and an RST may be sent.

The racing call sequence is as follows:
   CPU 0                           CPU 1
   -----                           -----
				inet_ehash_insert()
                                spin_lock()
                                sk_nulls_del_node_init_rcu(osk)
__inet_lookup_established()
	(lookup failed)
                                __sk_nulls_add_node_rcu(sk, list)
                                spin_unlock()
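
On the lookup side, the race corresponds to the lockless chain walk,
roughly as sketched below (simplified from __inet_lookup_established();
not the verbatim code):

	sk_nulls_for_each_rcu(sk, node, &head->chain) {
		if (sk->sk_hash != hash)
			continue;
		if (likely(inet_match(net, sk, acookie, ports, dif, sdif))) {
			if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
				goto out;
			goto found;
		}
	}
	/* In the window above, neither osk (already deleted) nor sk (not
	 * yet inserted) is on the chain, so the walk falls through and the
	 * lookup returns NULL, which can trigger an RST.
	 */
out:
	sk = NULL;
found:
	return sk;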

As both deletion and insertion operate on the same ehash chain, this patch
introduces a new sk_nulls_replace_node_init_rcu() helper function to
perform the replacement atomically.

Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions")
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 include/net/sock.h         | 14 ++++++++++++++
 net/ipv4/inet_hashtables.c |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 0fd465935334..5d67f5cbae52 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -854,6 +854,20 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk)
 	return rc;
 }
 
+static inline bool sk_nulls_replace_node_init_rcu(struct sock *old,
+						  struct sock *new)
+{
+	if (sk_hashed(old)) {
+		hlist_nulls_replace_init_rcu(&old->sk_nulls_node,
+					     &new->sk_nulls_node);
+		DEBUG_NET_WARN_ON_ONCE(refcount_read(&old->sk_refcnt) == 1);
+		__sock_put(old);
+		return true;
+	}
+
+	return false;
+}
+
 static inline void __sk_add_node(struct sock *sk, struct hlist_head *list)
 {
 	hlist_add_head(&sk->sk_node, list);
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index ef4ccfd46ff6..1ec224562494 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -685,8 +685,11 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	spin_lock(lock);
 	if (osk) {
 		WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
-		ret = sk_nulls_del_node_init_rcu(osk);
-	} else if (found_dup_sk) {
+		ret = sk_nulls_replace_node_init_rcu(osk, sk);
+		goto unlock;
+	}
+
+	if (found_dup_sk) {
 		*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
 		if (*found_dup_sk)
 			ret = false;
@@ -695,6 +698,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
 	if (ret)
 		__sk_nulls_add_node_rcu(sk, list);
 
+unlock:
 	spin_unlock(lock);
 
 	return ret;
-- 
2.25.1



* [PATCH net-next v7 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
  2025-09-26  7:40 [PATCH net-next v7 0/3] net: Avoid ehash lookup races xuanqiang.luo
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
  2025-09-26  7:40 ` [PATCH net-next v7 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
@ 2025-09-26  7:40 ` xuanqiang.luo
  2025-09-27  2:56 ` [PATCH net-next v7 0/3] net: Avoid ehash lookup races Jiayuan Chen
  3 siblings, 0 replies; 24+ messages in thread
From: xuanqiang.luo @ 2025-09-26  7:40 UTC (permalink / raw)
  To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo

From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Since ehash lookups are lockless, if another CPU is concurrently converting
sk to tw, fetching the newly inserted tw with tw->tw_refcnt == 0 causes a
lookup failure.

The racing call sequence is as follows:
   CPU 0                                CPU 1
   -----                                -----
				     inet_twsk_hashdance_schedule()
				     spin_lock()
				     inet_twsk_add_node_rcu(tw, ...)
__inet_lookup_established()
(find tw, failure due to tw_refcnt = 0)
				     __sk_nulls_del_node_init_rcu(sk)
				     refcount_set(&tw->tw_refcnt, 3)
				     spin_unlock()

By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after
setting tw_refcnt, we ensure that tw is either fully initialized or not
visible to other CPUs, eliminating the race.
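
The resulting publication order is, in essence (sketch, condensed from the
diff below):

	refcount_set(&tw->tw_refcnt, 3);
	smp_wmb();	/* order the refcount store before publication */
	hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);

so a lockless reader either still finds sk, or finds tw with tw_refcnt
already set, and its refcount_inc_not_zero() no longer observes zero.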

It's worth noting that lock_sock() is held before the replacement, so
there's no need to check whether sk is hashed. Thanks to Kuniyuki Iwashima!

Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls")
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
---
 net/ipv4/inet_timewait_sock.c | 35 ++++++++++++-----------------------
 1 file changed, 12 insertions(+), 23 deletions(-)

diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 5b5426b8ee92..3bc0e011b51a 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -87,12 +87,6 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
 }
 EXPORT_SYMBOL_GPL(inet_twsk_put);
 
-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
-				   struct hlist_nulls_head *list)
-{
-	hlist_nulls_add_head_rcu(&tw->tw_node, list);
-}
-
 static void inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo)
 {
 	__inet_twsk_schedule(tw, timeo, false);
@@ -112,13 +106,12 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 {
 	const struct inet_sock *inet = inet_sk(sk);
 	const struct inet_connection_sock *icsk = inet_csk(sk);
-	struct inet_ehash_bucket *ehead = inet_ehash_bucket(hashinfo, sk->sk_hash);
 	spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
 	struct inet_bind_hashbucket *bhead, *bhead2;
 
-	/* Step 1: Put TW into bind hash. Original socket stays there too.
-	   Note, that any socket with inet->num != 0 MUST be bound in
-	   binding cache, even if it is closed.
+	/* Put TW into bind hash. Original socket stays there too.
+	 * Note, that any socket with inet->num != 0 MUST be bound in
+	 * binding cache, even if it is closed.
 	 */
 	bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), inet->inet_num,
 			hashinfo->bhash_size)];
@@ -140,19 +133,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 
 	spin_lock(lock);
 
-	/* Step 2: Hash TW into tcp ehash chain */
-	inet_twsk_add_node_rcu(tw, &ehead->chain);
-
-	/* Step 3: Remove SK from hash chain */
-	if (__sk_nulls_del_node_init_rcu(sk))
-		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
-
-
-	/* Ensure above writes are committed into memory before updating the
-	 * refcount.
-	 * Provides ordering vs later refcount_inc().
-	 */
-	smp_wmb();
 	/* tw_refcnt is set to 3 because we have :
 	 * - one reference for bhash chain.
 	 * - one reference for ehash chain.
@@ -162,6 +142,15 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw,
 	 */
 	refcount_set(&tw->tw_refcnt, 3);
 
+	/* Ensure tw_refcnt has been set before tw is published.
+	 * smp_wmb() provides the necessary memory barrier to enforce this
+	 * ordering.
+	 */
+	smp_wmb();
+
+	hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node);
+	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+
 	inet_twsk_schedule(tw, timeo);
 
 	spin_unlock(lock);
-- 
2.25.1



* Re: [PATCH net-next v7 0/3] net: Avoid ehash lookup races
  2025-09-26  7:40 [PATCH net-next v7 0/3] net: Avoid ehash lookup races xuanqiang.luo
                   ` (2 preceding siblings ...)
  2025-09-26  7:40 ` [PATCH net-next v7 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
@ 2025-09-27  2:56 ` Jiayuan Chen
  3 siblings, 0 replies; 24+ messages in thread
From: Jiayuan Chen @ 2025-09-27  2:56 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba,
	netdev, Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Fri, Sep 26, 2025 at 03:40:30PM +0800, xuanqiang.luo@linux.dev wrote:
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> 
> After replacing R/W locks with RCU in commit 3ab5aee7fe84 ("net: Convert
> TCP & DCCP hash tables to use RCU / hlist_nulls"), a race window emerged
> during the switch from reqsk/sk to sk/tw.
> 
> Now that both timewait sock (tw) and full sock (sk) reside on the same
> ehash chain, it is appropriate to introduce hlist_nulls replace
> operations, to eliminate the race conditions caused by this window.
> 
> Before this series, I sent another version of the patch that attempted
> to avoid the issue with a lock mechanism. That approach turned out to
> have problems, so I've switched to the "replace" method in the current
> patches. For details, refer to:
> https://lore.kernel.org/netdev/20250903024406.2418362-1-xuanqiang.luo@linux.dev/
> 
> Before I ran into this issue myself, there had already been several
> historical discussions about it, so I'm adding the background links here
> for those interested:
> 1. https://lore.kernel.org/lkml/20230118015941.1313-1-kerneljasonxing@gmail.com/
> 2. https://lore.kernel.org/netdev/20230606064306.9192-1-duanmuquan@baidu.com/


Reviewed-by: Jiayuan Chen <jiayuan.chen@linux.dev>
---

Thank you Xuanqiang and Kuniyuki. This issue appears to have existed for a
long time. Under normal circumstances, it can be avoided when RSS or RPS is
enabled.

However, we have recently been experiencing it frequently in our production
environment. The root cause is that our TCP traffic is encapsulated in
VXLAN, and the same inner TCP flow does not always use the same outer UDP
4-tuple, so packets of one flow are steered to different CPUs and the host
processes them concurrently during decapsulation.

I tested this patch and it fixed this issue.


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
@ 2025-09-27 20:31   ` Kuniyuki Iwashima
  2025-09-30  9:16   ` Paolo Abeni
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 24+ messages in thread
From: Kuniyuki Iwashima @ 2025-09-27 20:31 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>
> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>
> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
> mentioned in the patch below:
> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
> rculist_nulls")
> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
> hlist_nulls")
>
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>

Thanks!


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
  2025-09-27 20:31   ` Kuniyuki Iwashima
@ 2025-09-30  9:16   ` Paolo Abeni
  2025-10-01 15:03     ` luoxuanqiang
  2025-10-13  5:36     ` Jiayuan Chen
  2025-10-01 12:19   ` Frederic Weisbecker
  2025-10-13  7:31   ` Eric Dumazet
  3 siblings, 2 replies; 24+ messages in thread
From: Paolo Abeni @ 2025-09-30  9:16 UTC (permalink / raw)
  To: xuanqiang.luo, edumazet, kuniyu, Paul E. McKenney
  Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay

On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> 
> Add two functions to atomically replace RCU-protected hlist_nulls entries.
> 
> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
> mentioned in the patch below:
> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
> rculist_nulls")
> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
> hlist_nulls")
> 
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

This deserves explicit ack from RCU maintainers.

Since we are finalizing the net-next PR, I suggest to defer this series
to the next cycle, to avoid rushing such request.

Thanks,

Paolo



* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
  2025-09-27 20:31   ` Kuniyuki Iwashima
  2025-09-30  9:16   ` Paolo Abeni
@ 2025-10-01 12:19   ` Frederic Weisbecker
  2025-10-13  7:31   ` Eric Dumazet
  3 siblings, 0 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2025-10-01 12:19 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: edumazet, kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba,
	netdev, Xuanqiang Luo, Neeraj Upadhyay

On Fri, Sep 26, 2025 at 03:40:31PM +0800, xuanqiang.luo@linux.dev wrote:
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> 
> Add two functions to atomically replace RCU-protected hlist_nulls entries.
> 
> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
> mentioned in the patch below:
> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
> rculist_nulls")
> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
> hlist_nulls")
> 
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

-- 
Frederic Weisbecker
SUSE Labs


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-30  9:16   ` Paolo Abeni
@ 2025-10-01 15:03     ` luoxuanqiang
  2025-10-13  5:36     ` Jiayuan Chen
  1 sibling, 0 replies; 24+ messages in thread
From: luoxuanqiang @ 2025-10-01 15:03 UTC (permalink / raw)
  To: Paolo Abeni, edumazet, kuniyu, Paul E. McKenney
  Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay


On 2025/9/30 17:16, Paolo Abeni wrote:
> On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>
>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>>
>> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
>> mentioned in the patch below:
>> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
>> rculist_nulls")
>> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
>> hlist_nulls")
>>
>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> This deserves explicit ack from RCU maintainers.
>
> Since we are finalizing the net-next PR, I suggest to defer this series
> to the next cycle, to avoid rushing such request.
>
> Thanks,
>
> Paolo
>
No problem. Thanks for noticing this series of patches!

I'll resend them in the next cycle if they get forgotten.

Thanks!
Xuanqiang



* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-30  9:16   ` Paolo Abeni
  2025-10-01 15:03     ` luoxuanqiang
@ 2025-10-13  5:36     ` Jiayuan Chen
  2025-10-13  6:26       ` Jason Xing
  1 sibling, 1 reply; 24+ messages in thread
From: Jiayuan Chen @ 2025-10-13  5:36 UTC (permalink / raw)
  To: Paolo Abeni, kuba, edumazet, davem, horms, kuniyu
  Cc: xuanqiang.luo, edumazet, kuniyu, Paul E. McKenney,
	kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay

On Tue, Sep 30, 2025 at 11:16:00AM +0800, Paolo Abeni wrote:
> On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
> > From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> > 
> > Add two functions to atomically replace RCU-protected hlist_nulls entries.
[...]
> > 
> > Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> 
> This deserves explicit ack from RCU maintainers.
> 
> Since we are finalizing the net-next PR, I suggest to defer this series
> to the next cycle, to avoid rushing such request.
> 
> Thanks,
> 
> Paolo

Hi maintainers,

This patch was previously held off due to the merge window.

Now that net-next has reopened and no further changes are required,
could we please consider merging it directly?

Apologies for the slight push, but I'm hoping we can get a formal
commit backported to our production branch.


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  5:36     ` Jiayuan Chen
@ 2025-10-13  6:26       ` Jason Xing
  2025-10-13  7:04         ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Xing @ 2025-10-13  6:26 UTC (permalink / raw)
  To: Jiayuan Chen
  Cc: Paolo Abeni, kuba, edumazet, davem, horms, kuniyu, xuanqiang.luo,
	Paul E. McKenney, netdev, Xuanqiang Luo, Frederic Weisbecker,
	Neeraj Upadhyay

On Mon, Oct 13, 2025 at 1:36 PM Jiayuan Chen <jiayuan.chen@linux.dev> wrote:
>
> On Tue, Sep 30, 2025 at 11:16:00AM +0800, Paolo Abeni wrote:
> > On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
> > > From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> > >
> > > Add two functions to atomically replace RCU-protected hlist_nulls entries.
> [...]
> > >
> > > Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> >
> > This deserves explicit ack from RCU maintainers.
> >
> > Since we are finalizing the net-next PR, I suggest to defer this series
> > to the next cycle, to avoid rushing such request.
> >
> > Thanks,
> >
> > Paolo
>
> Hi maintainers,
>
> This patch was previously held off due to the merge window.
>
> Now that net-next has reopened and no further changes are required,
> could we please consider merging it directly?
>
> Apologies for the slight push, but I'm hoping we can get a formal
> commit backported to our production branch.

I suppose a new, rebased version is necessary.

Thanks,
Jason


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  6:26       ` Jason Xing
@ 2025-10-13  7:04         ` luoxuanqiang
  2025-10-13 12:08           ` Simon Horman
  0 siblings, 1 reply; 24+ messages in thread
From: luoxuanqiang @ 2025-10-13  7:04 UTC (permalink / raw)
  To: Jason Xing, Jiayuan Chen
  Cc: Paolo Abeni, kuba, edumazet, davem, horms, kuniyu,
	Paul E. McKenney, netdev, Xuanqiang Luo, Frederic Weisbecker,
	Neeraj Upadhyay


On 2025/10/13 14:26, Jason Xing wrote:
> On Mon, Oct 13, 2025 at 1:36 PM Jiayuan Chen <jiayuan.chen@linux.dev> wrote:
>> On Tue, Sep 30, 2025 at 11:16:00AM +0800, Paolo Abeni wrote:
>>> On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
>>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>>
>>>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>> [...]
>>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>> This deserves explicit ack from RCU maintainers.
>>>
>>> Since we are finalizing the net-next PR, I suggest to defer this series
>>> to the next cycle, to avoid rushing such request.
>>>
>>> Thanks,
>>>
>>> Paolo
>> Hi maintainers,
>>
>> This patch was previously held off due to the merge window.
>>
>> Now that net-next has reopened and no further changes are required,
>> could we please consider merging it directly?
>>
>> Apologies for the slight push, but I'm hoping we can get a formal
>> commit backported to our production branch.
> I suppose a new, rebased version is necessary.
>
> Thanks,
> Jason

I’ve rebased the series of patches onto the latest codebase locally and
didn’t encounter any errors.

If there’s anything else I can do to help get these patches merged, just
let me know.

Thanks!



* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
                     ` (2 preceding siblings ...)
  2025-10-01 12:19   ` Frederic Weisbecker
@ 2025-10-13  7:31   ` Eric Dumazet
  2025-10-13  8:25     ` luoxuanqiang
  3 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2025-10-13  7:31 UTC (permalink / raw)
  To: xuanqiang.luo
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>
> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>
> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>
> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
> mentioned in the patch below:
> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
> rculist_nulls")
> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
> hlist_nulls")
>
> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> ---
>  include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
>  1 file changed, 59 insertions(+)
>
> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> index 89186c499dd4..c26cb83ca071 100644
> --- a/include/linux/rculist_nulls.h
> +++ b/include/linux/rculist_nulls.h
> @@ -52,6 +52,13 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
>  #define hlist_nulls_next_rcu(node) \
>         (*((struct hlist_nulls_node __rcu __force **)&(node)->next))
>
> +/**
> + * hlist_nulls_pprev_rcu - returns the dereferenced pprev of @node.
> + * @node: element of the list.
> + */
> +#define hlist_nulls_pprev_rcu(node) \
> +       (*((struct hlist_nulls_node __rcu __force **)(node)->pprev))
> +
>  /**
>   * hlist_nulls_del_rcu - deletes entry from hash list without re-initialization
>   * @n: the element to delete from the hash list.
> @@ -152,6 +159,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
>         n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
>  }
>
> +/**
> + * hlist_nulls_replace_rcu - replace an old entry by a new one
> + * @old: the element to be replaced
> + * @new: the new element to insert
> + *
> + * Description:
> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> + * permitting racing traversals.
> + *
> + * The caller must take whatever precautions are necessary (such as holding
> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> + * list.  However, it is perfectly legal to run concurrently with the _rcu
> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> + */
> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> +                                          struct hlist_nulls_node *new)
> +{
> +       struct hlist_nulls_node *next = old->next;
> +
> +       WRITE_ONCE(new->next, next);
> +       WRITE_ONCE(new->pprev, old->pprev);
I do not think these two WRITE_ONCE() are needed.

At this point new is not yet visible.

The following  rcu_assign_pointer() is enough to make sure prior
writes are committed to memory.
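
Something like this (untested sketch) would do:

static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
					   struct hlist_nulls_node *new)
{
	struct hlist_nulls_node *next = old->next;

	new->next = next;		/* new is not yet visible */
	new->pprev = old->pprev;	/* so plain stores would suffice */
	rcu_assign_pointer(hlist_nulls_pprev_rcu(new), new);
	if (!is_a_nulls(next))
		WRITE_ONCE(next->pprev, &new->next);
}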

> +       rcu_assign_pointer(hlist_nulls_pprev_rcu(new), new);
> +       if (!is_a_nulls(next))
> +               WRITE_ONCE(next->pprev, &new->next);
> +}
> +
> +/**
> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
> + * initialize the old
> + * @old: the element to be replaced
> + * @new: the new element to insert
> + *
> + * Description:
> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> + * permitting racing traversals, and reinitialize the old entry.
> + *
> + * Note: @old must be hashed.
> + *
> + * The caller must take whatever precautions are necessary (such as holding
> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> + * list. However, it is perfectly legal to run concurrently with the _rcu
> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> + */
> +static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
> +                                               struct hlist_nulls_node *new)
> +{
> +       hlist_nulls_replace_rcu(old, new);
> +       WRITE_ONCE(old->pprev, NULL);
> +}
> +
>  /**
>   * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
>   * @tpos:      the type * to use as a loop cursor.
> --
> 2.25.1
>


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  7:31   ` Eric Dumazet
@ 2025-10-13  8:25     ` luoxuanqiang
  2025-10-13  9:49       ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: luoxuanqiang @ 2025-10-13  8:25 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/13 15:31, Eric Dumazet wrote:
> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>
>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>>
>> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
>> mentioned in the patch below:
>> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
>> rculist_nulls")
>> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
>> hlist_nulls")
>>
>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>> ---
>>   include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
>>   1 file changed, 59 insertions(+)
>>
>> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
>> index 89186c499dd4..c26cb83ca071 100644
>> --- a/include/linux/rculist_nulls.h
>> +++ b/include/linux/rculist_nulls.h
>> @@ -52,6 +52,13 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
>>   #define hlist_nulls_next_rcu(node) \
>>          (*((struct hlist_nulls_node __rcu __force **)&(node)->next))
>>
>> +/**
>> + * hlist_nulls_pprev_rcu - returns the dereferenced pprev of @node.
>> + * @node: element of the list.
>> + */
>> +#define hlist_nulls_pprev_rcu(node) \
>> +       (*((struct hlist_nulls_node __rcu __force **)(node)->pprev))
>> +
>>   /**
>>    * hlist_nulls_del_rcu - deletes entry from hash list without re-initialization
>>    * @n: the element to delete from the hash list.
>> @@ -152,6 +159,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
>>          n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
>>   }
>>
>> +/**
>> + * hlist_nulls_replace_rcu - replace an old entry by a new one
>> + * @old: the element to be replaced
>> + * @new: the new element to insert
>> + *
>> + * Description:
>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
>> + * permitting racing traversals.
>> + *
>> + * The caller must take whatever precautions are necessary (such as holding
>> + * appropriate locks) to avoid racing with another list-mutation primitive, such
>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
>> + * list.  However, it is perfectly legal to run concurrently with the _rcu
>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
>> + */
>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>> +                                          struct hlist_nulls_node *new)
>> +{
>> +       struct hlist_nulls_node *next = old->next;
>> +
>> +       WRITE_ONCE(new->next, next);
>> +       WRITE_ONCE(new->pprev, old->pprev);
> I do not think these two WRITE_ONCE() are needed.
>
> At this point new is not yet visible.
>
> The following  rcu_assign_pointer() is enough to make sure prior
> writes are committed to memory.

Dear Eric,

I'm referring to your more detailed explanation on the other patch [0];
thank you for that!

However, regarding new->next, if the new object is allocated with
SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
rculist_nulls”)?

Also, are the WRITE_ONCE() assignments to ->pprev that commit 860c8802ace1
(“rcu: Use WRITE_ONCE() for assignments to ->pprev for hlist_nulls”)
introduced within hlist_nulls_add_head_rcu() unnecessary as well?

[0]: https://lore.kernel.org/all/CANn89iKQM=4wjCLxpg-m3jYoUm=rsSk68xVLN2902di2+FkSFg@mail.gmail.com/

Thanks!

>> +       rcu_assign_pointer(hlist_nulls_pprev_rcu(new), new);
>> +       if (!is_a_nulls(next))
>> +               WRITE_ONCE(next->pprev, &new->next);
>> +}
>> +
>> +/**
>> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
>> + * initialize the old
>> + * @old: the element to be replaced
>> + * @new: the new element to insert
>> + *
>> + * Description:
>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
>> + * permitting racing traversals, and reinitialize the old entry.
>> + *
>> + * Note: @old must be hashed.
>> + *
>> + * The caller must take whatever precautions are necessary (such as holding
>> + * appropriate locks) to avoid racing with another list-mutation primitive, such
>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
>> + * list. However, it is perfectly legal to run concurrently with the _rcu
>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
>> + */
>> +static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
>> +                                               struct hlist_nulls_node *new)
>> +{
>> +       hlist_nulls_replace_rcu(old, new);
>> +       WRITE_ONCE(old->pprev, NULL);
>> +}
>> +
>>   /**
>>    * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
>>    * @tpos:      the type * to use as a loop cursor.
>> --
>> 2.25.1
>>


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  8:25     ` luoxuanqiang
@ 2025-10-13  9:49       ` Eric Dumazet
  2025-10-14  7:20         ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2025-10-13  9:49 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>
>
> On 2025/10/13 15:31, Eric Dumazet wrote:
> > On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
> >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> >>
> >> Add two functions to atomically replace RCU-protected hlist_nulls entries.
> >>
> >> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
> >> mentioned in the patch below:
> >> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
> >> rculist_nulls")
> >> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
> >> hlist_nulls")
> >>
> >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> >> ---
> >>   include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
> >>   1 file changed, 59 insertions(+)
> >>
> >> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
> >> index 89186c499dd4..c26cb83ca071 100644
> >> --- a/include/linux/rculist_nulls.h
> >> +++ b/include/linux/rculist_nulls.h
> >> @@ -52,6 +52,13 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
> >>   #define hlist_nulls_next_rcu(node) \
> >>          (*((struct hlist_nulls_node __rcu __force **)&(node)->next))
> >>
> >> +/**
> >> + * hlist_nulls_pprev_rcu - returns the dereferenced pprev of @node.
> >> + * @node: element of the list.
> >> + */
> >> +#define hlist_nulls_pprev_rcu(node) \
> >> +       (*((struct hlist_nulls_node __rcu __force **)(node)->pprev))
> >> +
> >>   /**
> >>    * hlist_nulls_del_rcu - deletes entry from hash list without re-initialization
> >>    * @n: the element to delete from the hash list.
> >> @@ -152,6 +159,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
> >>          n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
> >>   }
> >>
> >> +/**
> >> + * hlist_nulls_replace_rcu - replace an old entry by a new one
> >> + * @old: the element to be replaced
> >> + * @new: the new element to insert
> >> + *
> >> + * Description:
> >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> >> + * permitting racing traversals.
> >> + *
> >> + * The caller must take whatever precautions are necessary (such as holding
> >> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> >> + * list.  However, it is perfectly legal to run concurrently with the _rcu
> >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> >> + */
> >> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> >> +                                          struct hlist_nulls_node *new)
> >> +{
> >> +       struct hlist_nulls_node *next = old->next;
> >> +
> >> +       WRITE_ONCE(new->next, next);
> >> +       WRITE_ONCE(new->pprev, old->pprev);
> > I do not think these two WRITE_ONCE() are needed.
> >
> > At this point new is not yet visible.
> >
> > The following  rcu_assign_pointer() is enough to make sure prior
> > writes are committed to memory.
>
> Dear Eric,
>
> I’m quoting your more detailed explanation from the other patch [0], thank
> you for that!
>
> However, regarding new->next, if the new object is allocated with
> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
> rculist_nulls”)?
>
> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?

I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.

It is a bit weird to annotate the writes but not the lockless reads,
even if apparently KCSAN is okay with that.
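
For reference, the lockless read in question boils down to roughly this
(quoting include/net/sock.h and include/linux/list.h from memory):

	static inline bool sk_unhashed(const struct sock *sk)
	{
		return hlist_unhashed(&sk->sk_node);
	}

	static inline int hlist_unhashed(const struct hlist_node *h)
	{
		return !h->pprev;	/* plain read, racing with the WRITE_ONCE()s */
	}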


>
> [0]: https://lore.kernel.org/all/CANn89iKQM=4wjCLxpg-m3jYoUm=rsSk68xVLN2902di2+FkSFg@mail.gmail.com/
>
> Thanks!
>
> >> +       rcu_assign_pointer(hlist_nulls_pprev_rcu(new), new);
> >> +       if (!is_a_nulls(next))
> >> +               WRITE_ONCE(next->pprev, &new->next);
> >> +}
> >> +
> >> +/**
> >> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and
> >> + * initialize the old
> >> + * @old: the element to be replaced
> >> + * @new: the new element to insert
> >> + *
> >> + * Description:
> >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
> >> + * permitting racing traversals, and reinitialize the old entry.
> >> + *
> >> + * Note: @old must be hashed.
> >> + *
> >> + * The caller must take whatever precautions are necessary (such as holding
> >> + * appropriate locks) to avoid racing with another list-mutation primitive, such
> >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
> >> + * list. However, it is perfectly legal to run concurrently with the _rcu
> >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
> >> + */
> >> +static inline void hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old,
> >> +                                               struct hlist_nulls_node *new)
> >> +{
> >> +       hlist_nulls_replace_rcu(old, new);
> >> +       WRITE_ONCE(old->pprev, NULL);
> >> +}
> >> +
> >>   /**
> >>    * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
> >>    * @tpos:      the type * to use as a loop cursor.
> >> --
> >> 2.25.1
> >>


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  7:04         ` luoxuanqiang
@ 2025-10-13 12:08           ` Simon Horman
  2025-10-14  2:29             ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Simon Horman @ 2025-10-13 12:08 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: Jason Xing, Jiayuan Chen, Paolo Abeni, kuba, edumazet, davem,
	kuniyu, Paul E. McKenney, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay

On Mon, Oct 13, 2025 at 03:04:34PM +0800, luoxuanqiang wrote:
> 
> On 2025/10/13 14:26, Jason Xing wrote:
> > On Mon, Oct 13, 2025 at 1:36 PM Jiayuan Chen <jiayuan.chen@linux.dev> wrote:
> > > On Tue, Sep 30, 2025 at 11:16:00AM +0800, Paolo Abeni wrote:
> > > > On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
> > > > > From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> > > > > 
> > > > > Add two functions to atomically replace RCU-protected hlist_nulls entries.
> > > [...]
> > > > > Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
> > > > This deserves explicit ack from RCU maintainers.
> > > > 
> > > > Since we are finalizing the net-next PR, I suggest to defer this series
> > > > to the next cycle, to avoid rushing such request.
> > > > 
> > > > Thanks,
> > > > 
> > > > Paolo
> > > Hi maintainers,
> > > 
> > > This patch was previously held off due to the merge window.
> > > 
> > > Now that net-next has reopened and no further changes are required,
> > > could we please consider merging it directly?
> > > 
> > > Apologies for the slight push, but I'm hoping we can get a formal
> > > commit backported to our production branch.
> > I suppose a new, rebased version is necessary.
> > 
> > Thanks,
> > Jason
> 
> I’ve rebased the series of patches onto the latest codebase locally and
> didn’t encounter any errors.
> 
> If there’s anything else I can do to help get these patches merged, just
> let me know.

Hi,

The patch-set has been marked as "Deferred" in Patchwork.
Presumably by Paolo in conjunction with his response above.
As such the patch-set needs to be (rebased and) reposted in
order for it to be considered by the maintainers again.

I think the best practice is for this to happen _after_ one
of the maintainers has sent an "ANN" email announcing that
net-next has re-opened. I don't believe that has happened yet.

Thanks!


* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13 12:08           ` Simon Horman
@ 2025-10-14  2:29             ` luoxuanqiang
  0 siblings, 0 replies; 24+ messages in thread
From: luoxuanqiang @ 2025-10-14  2:29 UTC (permalink / raw)
  To: Simon Horman
  Cc: Jason Xing, Jiayuan Chen, Paolo Abeni, kuba, edumazet, davem,
	kuniyu, Paul E. McKenney, netdev, Xuanqiang Luo,
	Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/13 20:08, Simon Horman wrote:
> On Mon, Oct 13, 2025 at 03:04:34PM +0800, luoxuanqiang wrote:
>> On 2025/10/13 14:26, Jason Xing wrote:
>>> On Mon, Oct 13, 2025 at 1:36 PM Jiayuan Chen <jiayuan.chen@linux.dev> wrote:
>>>> On Tue, Sep 30, 2025 at 11:16:00AM +0800, Paolo Abeni wrote:
>>>>> On 9/26/25 9:40 AM, xuanqiang.luo@linux.dev wrote:
>>>>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>>>>
>>>>>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>>>> [...]
>>>>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>>> This deserves explicit ack from RCU maintainers.
>>>>>
>>>>> Since we are finalizing the net-next PR, I suggest to defer this series
>>>>> to the next cycle, to avoid rushing such request.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Paolo
>>>> Hi maintainers,
>>>>
>>>> This patch was previously held off due to the merge window.
>>>>
>>>> Now that net-next has reopened and no further changes are required,
>>>> could we please consider merging it directly?
>>>>
>>>> Apologies for the slight push, but I'm hoping we can get a formal
>>>> commit backported to our production branch.
>>> I suppose a new, rebased version is necessary.
>>>
>>> Thanks,
>>> Jason
>> I’ve rebased the series of patches onto the latest codebase locally and
>> didn’t encounter any errors.
>>
>> If there’s anything else I can do to help get these patches merged, just
>> let me know.
> Hi,
>
> The patch-set has been marked as "Deferred" in Patchwork.
> Presumably by Paolo in conjunction with his response above.
> As such the patch-set needs to be (rebased and) reposted in
> order for it to be considered by the maintainers again.
>
> I think the best practice is for this to happen _after_ one
> of the maintainers has sent an "ANN" email announcing that
> net-next has re-opened. I don't believe that has happened yet.
>
> Thanks!

Dear Simon,

Thanks for the detailed explanation. I think we now know how
to handle this kind of situation.

net-next has been reopened, and I have rebased and resent
the patch-set.

Thanks!



* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-13  9:49       ` Eric Dumazet
@ 2025-10-14  7:20         ` luoxuanqiang
  2025-10-14  7:34           ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: luoxuanqiang @ 2025-10-14  7:20 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/13 17:49, Eric Dumazet wrote:
> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>
>> On 2025/10/13 15:31, Eric Dumazet wrote:
>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>>
>>>> Add two functions to atomically replace RCU-protected hlist_nulls entries.
>>>>
>>>> Keep using WRITE_ONCE() to assign values to ->next and ->pprev, as
>>>> mentioned in the patch below:
>>>> commit efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for
>>>> rculist_nulls")
>>>> commit 860c8802ace1 ("rcu: Use WRITE_ONCE() for assignments to ->pprev for
>>>> hlist_nulls")
>>>>
>>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
>>>> ---
>>>>    include/linux/rculist_nulls.h | 59 +++++++++++++++++++++++++++++++++++
>>>>    1 file changed, 59 insertions(+)
>>>>
>>>> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h
>>>> index 89186c499dd4..c26cb83ca071 100644
>>>> --- a/include/linux/rculist_nulls.h
>>>> +++ b/include/linux/rculist_nulls.h
>>>> @@ -52,6 +52,13 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
>>>>    #define hlist_nulls_next_rcu(node) \
>>>>           (*((struct hlist_nulls_node __rcu __force **)&(node)->next))
>>>>
>>>> +/**
>>>> + * hlist_nulls_pprev_rcu - returns the dereferenced pprev of @node.
>>>> + * @node: element of the list.
>>>> + */
>>>> +#define hlist_nulls_pprev_rcu(node) \
>>>> +       (*((struct hlist_nulls_node __rcu __force **)(node)->pprev))
>>>> +
>>>>    /**
>>>>     * hlist_nulls_del_rcu - deletes entry from hash list without re-initialization
>>>>     * @n: the element to delete from the hash list.
>>>> @@ -152,6 +159,58 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
>>>>           n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL);
>>>>    }
>>>>
>>>> +/**
>>>> + * hlist_nulls_replace_rcu - replace an old entry by a new one
>>>> + * @old: the element to be replaced
>>>> + * @new: the new element to insert
>>>> + *
>>>> + * Description:
>>>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while
>>>> + * permitting racing traversals.
>>>> + *
>>>> + * The caller must take whatever precautions are necessary (such as holding
>>>> + * appropriate locks) to avoid racing with another list-mutation primitive, such
>>>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same
>>>> + * list.  However, it is perfectly legal to run concurrently with the _rcu
>>>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu().
>>>> + */
>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>>>> +                                          struct hlist_nulls_node *new)
>>>> +{
>>>> +       struct hlist_nulls_node *next = old->next;
>>>> +
>>>> +       WRITE_ONCE(new->next, next);
>>>> +       WRITE_ONCE(new->pprev, old->pprev);
>>> I do not think these two WRITE_ONCE() are needed.
>>>
>>> At this point new is not yet visible.
>>>
>>> The following  rcu_assign_pointer() is enough to make sure prior
>>> writes are committed to memory.
>> Dear Eric,
>>
>> I’m quoting your more detailed explanation from the other patch [0], thank
>> you for that!
>>
>> However, regarding new->next, if the new object is allocated with
>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
>> rculist_nulls”)?
>>
>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
>
> It is a bit weird to annotate the writes, but not the lockless reads,
> even if apparently KCSAN
> is okay with that.
>
Dear Eric,

I'm sorry, I still haven't fully grasped the scenario you mentioned where
sk_unhashed()/sk_hashed() can be called from lockless contexts. It seems
similar to the race described in commit 860c8802ace1 (“rcu: Use
WRITE_ONCE() for assignments to ->pprev for hlist_nulls”); see the KCSAN
report at [0] below.

Two CPUs invoke inet_unhash() from the tcp_retransmit_timer() path on the
same sk, causing a race even though tcp_retransmit_timer() checks
lockdep_sock_is_held(sk).

How does this race happen? I can’t find more details to understand the
situation, so any hints would be greatly appreciated.

My simple understanding is that hlist_nulls_replace_rcu() might have the
same call path as hlist_nulls_add_head_rcu(), so I keep using WRITE_ONCE().

Finally, Kuniyuki Iwashima also raised a similar discussion in the v3
series; here’s the link [1].

[0]:
------------------------------------------------------------------------

BUG: KCSAN: data-race in inet_unhash / inet_unhash

write to 0xffff8880a69a0170 of 8 bytes by interrupt on cpu 1:
  __hlist_nulls_del include/linux/list_nulls.h:88 [inline]
  hlist_nulls_del_init_rcu include/linux/rculist_nulls.h:36 [inline]
  __sk_nulls_del_node_init_rcu include/net/sock.h:676 [inline]
  inet_unhash+0x38f/0x4a0 net/ipv4/inet_hashtables.c:612
  tcp_set_state+0xfa/0x3e0 net/ipv4/tcp.c:2249
  tcp_done+0x93/0x1e0 net/ipv4/tcp.c:3854
  tcp_write_err+0x7e/0xc0 net/ipv4/tcp_timer.c:56
  tcp_retransmit_timer+0x9b8/0x16d0 net/ipv4/tcp_timer.c:479
  tcp_write_timer_handler+0x42d/0x510 net/ipv4/tcp_timer.c:599
  tcp_write_timer+0xd1/0xf0 net/ipv4/tcp_timer.c:619
  call_timer_fn+0x5f/0x2f0 kernel/time/timer.c:1404
  expire_timers kernel/time/timer.c:1449 [inline]
  __run_timers kernel/time/timer.c:1773 [inline]
  __run_timers kernel/time/timer.c:1740 [inline]
  run_timer_softirq+0xc0c/0xcd0 kernel/time/timer.c:1786
  __do_softirq+0x115/0x33f kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0xbb/0xe0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0xe6/0x280 arch/x86/kernel/apic/apic.c:1137
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:830
  native_safe_halt+0xe/0x10 arch/x86/kernel/paravirt.c:71
  arch_cpu_idle+0x1f/0x30 arch/x86/kernel/process.c:571
  default_idle_call+0x1e/0x40 kernel/sched/idle.c:94
  cpuidle_idle_call kernel/sched/idle.c:154 [inline]
  do_idle+0x1af/0x280 kernel/sched/idle.c:263
  cpu_startup_entry+0x1b/0x20 kernel/sched/idle.c:355
  start_secondary+0x208/0x260 arch/x86/kernel/smpboot.c:264
  secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:241

read to 0xffff8880a69a0170 of 8 bytes by interrupt on cpu 0:
  sk_unhashed include/net/sock.h:607 [inline]
  inet_unhash+0x3d/0x4a0 net/ipv4/inet_hashtables.c:592
  tcp_set_state+0xfa/0x3e0 net/ipv4/tcp.c:2249
  tcp_done+0x93/0x1e0 net/ipv4/tcp.c:3854
  tcp_write_err+0x7e/0xc0 net/ipv4/tcp_timer.c:56
  tcp_retransmit_timer+0x9b8/0x16d0 net/ipv4/tcp_timer.c:479
  tcp_write_timer_handler+0x42d/0x510 net/ipv4/tcp_timer.c:599
  tcp_write_timer+0xd1/0xf0 net/ipv4/tcp_timer.c:619
  call_timer_fn+0x5f/0x2f0 kernel/time/timer.c:1404
  expire_timers kernel/time/timer.c:1449 [inline]
  __run_timers kernel/time/timer.c:1773 [inline]
  __run_timers kernel/time/timer.c:1740 [inline]
  run_timer_softirq+0xc0c/0xcd0 kernel/time/timer.c:1786
  __do_softirq+0x115/0x33f kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0xbb/0xe0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0xe6/0x280 arch/x86/kernel/apic/apic.c:1137
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:830
  native_safe_halt+0xe/0x10 arch/x86/kernel/paravirt.c:71
  arch_cpu_idle+0x1f/0x30 arch/x86/kernel/process.c:571
  default_idle_call+0x1e/0x40 kernel/sched/idle.c:94
  cpuidle_idle_call kernel/sched/idle.c:154 [inline]
  do_idle+0x1af/0x280 kernel/sched/idle.c:263
  cpu_startup_entry+0x1b/0x20 kernel/sched/idle.c:355
  rest_init+0xec/0xf6 init/main.c:452
  arch_call_rest_init+0x17/0x37
  start_kernel+0x838/0x85e init/main.c:786
  x86_64_start_reservations+0x29/0x2b arch/x86/kernel/head64.c:490
  x86_64_start_kernel+0x72/0x76 arch/x86/kernel/head64.c:471
  secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:241

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.0-rc6+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011

------------------------------------------------------------------------

[1]: https://lore.kernel.org/all/CAAVpQUCoCizxTm6wRs0+n6_kPK+kgxwszsYKNds3YvuBfBvrhg@mail.gmail.com/

Thanks!


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14  7:20         ` luoxuanqiang
@ 2025-10-14  7:34           ` Eric Dumazet
  2025-10-14  8:04             ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2025-10-14  7:34 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>
>
> On 2025/10/13 17:49, Eric Dumazet wrote:
> > On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>
> >> On 2025/10/13 15:31, Eric Dumazet wrote:
> >>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
> >>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
> >>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> >>>> +                                          struct hlist_nulls_node *new)
> >>>> +{
> >>>> +       struct hlist_nulls_node *next = old->next;
> >>>> +
> >>>> +       WRITE_ONCE(new->next, next);
> >>>> +       WRITE_ONCE(new->pprev, old->pprev);
> >>> I do not think these two WRITE_ONCE() are needed.
> >>>
> >>> At this point new is not yet visible.
> >>>
> >>> The following  rcu_assign_pointer() is enough to make sure prior
> >>> writes are committed to memory.
> >> Dear Eric,
> >>
> >> I’m quoting your more detailed explanation from the other patch [0], thank
> >> you for that!
> >>
> >> However, regarding new->next, if the new object is allocated with
> >> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
> >> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
> >> rculist_nulls”)?
> >>
> >> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
> >> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
> >> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
> > I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
> >
> > It is a bit weird to annotate the writes, but not the lockless reads,
> > even if apparently KCSAN
> > is okay with that.
> >
> Dear Eric,
>
> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
> similar to the race described in commit 860c8802ace1 (“rcu: Use
> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
>

inet_unhash() does a lockless sk_unhashed(sk) call while, in some cases,
no lock is held (look at tcp_done()):

void inet_unhash(struct sock *sk)
{
	struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);

	if (sk_unhashed(sk))	// Here no lock is held
		return;

The relevant lock (which one depends on sk->sk_state == TCP_LISTEN) is
acquired a few lines later.

Then __sk_nulls_del_node_init_rcu() is called safely, with the bucket lock
held.




> Two CPUs invoke inet_unhash() from the tcp_retransmit_timer() path on the
> same sk, causing a race even though tcp_retransmit_timer() checks
> lockdep_sock_is_held(sk).
>
> How does this race happen? I can’t find more details to understand the
> situation, so any hints would be greatly appreciated.
>
> My simple understanding is that hlist_nulls_replace_rcu() might have the
> same call path as hlist_nulls_add_head_rcu(), so I keep using WRITE_ONCE().
>
> Finally, Kuniyuki Iwashima also raised a similar discussion in the v3
> series; here’s the link [1].
>
> [0]: (KCSAN report snipped; quoted in full earlier in the thread)
>
> [1]: https://lore.kernel.org/all/CAAVpQUCoCizxTm6wRs0+n6_kPK+kgxwszsYKNds3YvuBfBvrhg@mail.gmail.com/
>
> Thanks!
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14  7:34           ` Eric Dumazet
@ 2025-10-14  8:04             ` luoxuanqiang
  2025-10-14  8:09               ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: luoxuanqiang @ 2025-10-14  8:04 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/14 15:34, Eric Dumazet wrote:
> On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>
>> On 2025/10/13 17:49, Eric Dumazet wrote:
>>> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>> On 2025/10/13 15:31, Eric Dumazet wrote:
>>>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>>>>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
>>>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>>>>>> +                                          struct hlist_nulls_node *new)
>>>>>> +{
>>>>>> +       struct hlist_nulls_node *next = old->next;
>>>>>> +
>>>>>> +       WRITE_ONCE(new->next, next);
>>>>>> +       WRITE_ONCE(new->pprev, old->pprev);
>>>>> I do not think these two WRITE_ONCE() are needed.
>>>>>
>>>>> At this point new is not yet visible.
>>>>>
>>>>> The following  rcu_assign_pointer() is enough to make sure prior
>>>>> writes are committed to memory.
>>>> Dear Eric,
>>>>
>>>> I’m quoting your more detailed explanation from the other patch [0], thank
>>>> you for that!
>>>>
>>>> However, regarding new->next, if the new object is allocated with
>>>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
>>>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
>>>> rculist_nulls”)?
>>>>
>>>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
>>>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
>>>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
>>> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
>>>
>>> It is a bit weird to annotate the writes, but not the lockless reads,
>>> even if apparently KCSAN
>>> is okay with that.
>>>
>> Dear Eric,
>>
>> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
>> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
>> similar to the race described in commit 860c8802ace1 (“rcu: Use
>> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
>>
> inet_unhash() does a lockless sk_unhash(sk) call, while no lock is
> held in some cases (look at tcp_done())
>
> void inet_unhash(struct sock *sk)
> {
> struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);
>
> if (sk_unhashed(sk))    // Here no lock is held
>      return;
>
> Relevant lock (depending on (sk->sk_state == TCP_LISTEN)) is acquired
> a few lines later.
>
> Then
>
> __sk_nulls_del_node_init_rcu() is called safely, while the bucket lock is held.
>
Dear Eric,

Thanks for the quick response!

In the call path:
         tcp_retransmit_timer()
                 tcp_write_err()
                         tcp_done()

tcp_retransmit_timer() already uses lockdep_sock_is_held(sk) to assert the
socket-lock state:

void tcp_retransmit_timer(struct sock *sk)
{
	struct tcp_sock *tp = tcp_sk(sk);
	struct net *net = sock_net(sk);
	struct inet_connection_sock *icsk = inet_csk(sk);
	struct request_sock *req;
	struct sk_buff *skb;

	req = rcu_dereference_protected(tp->fastopen_rsk,
					lockdep_sock_is_held(sk)); // Check here

Does that mean we’re already protected by lock_sock(sk) or
bh_lock_sock(sk)?
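
For context, this is the timer entry point I was looking at (simplified
from net/ipv4/tcp_timer.c and quoted from memory, so details may differ):

static void tcp_write_timer(struct timer_list *t)
{
	struct inet_connection_sock *icsk =
			from_timer(icsk, t, icsk_retransmit_timer);
	struct sock *sk = &icsk->icsk_inet.sk;

	bh_lock_sock(sk);
	if (!sock_owned_by_user(sk)) {
		/* Ends up in tcp_retransmit_timer() via the handler. */
		tcp_write_timer_handler(sk);
	} else {
		/* Owned by user: delegate the work to tcp_release_cb(). */
		if (!test_and_set_bit(TCP_WRITE_TIMER_DEFERRED, &sk->sk_tsq_flags))
			sock_hold(sk);
	}
	bh_unlock_sock(sk);
	sock_put(sk);
}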

Thanks!

>
>
>> Two CPUs invoke inet_unhash() from the tcp_retransmit_timer() path on the
>> same sk, causing a race even though tcp_retransmit_timer() checks
>> lockdep_sock_is_held(sk).
>>
>> How does this race happen? I can’t find more details to understand the
>> situation, so any hints would be greatly appreciated.
>>
>> My simple understanding is that hlist_nulls_replace_rcu() might have the
>> same call path as hlist_nulls_add_head_rcu(), so I keep using WRITE_ONCE().
>>
>> Finally, Kuniyuki Iwashima also raised a similar discussion in the v3
>> series; here’s the link [1].
>>
>> [0]: (KCSAN report snipped; quoted in full earlier in the thread)
>>
>> [1]: https://lore.kernel.org/all/CAAVpQUCoCizxTm6wRs0+n6_kPK+kgxwszsYKNds3YvuBfBvrhg@mail.gmail.com/
>>
>> Thanks!
>>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14  8:04             ` luoxuanqiang
@ 2025-10-14  8:09               ` Eric Dumazet
  2025-10-14  8:40                 ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2025-10-14  8:09 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Tue, Oct 14, 2025 at 1:05 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>
>
> > On 2025/10/14 15:34, Eric Dumazet wrote:
> > On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>
> >> On 2025/10/13 17:49, Eric Dumazet wrote:
> >>> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>>> On 2025/10/13 15:31, Eric Dumazet wrote:
> >>>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
> >>>>>>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
> >>>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> >>>>>> +                                          struct hlist_nulls_node *new)
> >>>>>> +{
> >>>>>> +       struct hlist_nulls_node *next = old->next;
> >>>>>> +
> >>>>>> +       WRITE_ONCE(new->next, next);
> >>>>>> +       WRITE_ONCE(new->pprev, old->pprev);
> >>>>> I do not think these two WRITE_ONCE() are needed.
> >>>>>
> >>>>> At this point new is not yet visible.
> >>>>>
> >>>>> The following  rcu_assign_pointer() is enough to make sure prior
> >>>>> writes are committed to memory.
> >>>> Dear Eric,
> >>>>
> >>>> I’m quoting your more detailed explanation from the other patch [0], thank
> >>>> you for that!
> >>>>
> >>>> However, regarding new->next, if the new object is allocated with
> >>>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
> >>>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
> >>>> rculist_nulls”)?
> >>>>
> >>>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
> >>>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
> >>>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
> >>> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
> >>>
> >>> It is a bit weird to annotate the writes, but not the lockless reads,
> >>> even if apparently KCSAN
> >>> is okay with that.
> >>>
> >> Dear Eric,
> >>
> >> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
> >> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
> >> similar to the race described in commit 860c8802ace1 (“rcu: Use
> >> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
> >>
> > inet_unhash() does a lockless sk_unhash(sk) call, while no lock is
> > held in some cases (look at tcp_done())
> >
> > void inet_unhash(struct sock *sk)
> > {
> > struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);
> >
> > if (sk_unhashed(sk))    // Here no lock is held
> >      return;
> >
> > Relevant lock (depending on (sk->sk_state == TCP_LISTEN)) is acquired
> > a few lines later.
> >
> > Then
> >
> > __sk_nulls_del_node_init_rcu() is called safely, while the bucket lock is held.
> >
> Dear Eric,
>
> Thanks for the quick response!
>
> In the call path:
>          tcp_retransmit_timer()
>                  tcp_write_err()
>                          tcp_done()
>
> tcp_retransmit_timer() already calls lockdep_sock_is_held(sk) to check the
> socket‑lock state.
>
> void tcp_retransmit_timer(struct sock *sk)
> {
>          struct tcp_sock *tp = tcp_sk(sk);
>          struct net *net = sock_net(sk);
>          struct inet_connection_sock *icsk = inet_csk(sk);
>          struct request_sock *req;
>          struct sk_buff *skb;
>
>          req = rcu_dereference_protected(tp->fastopen_rsk,
>                                   lockdep_sock_is_held(sk)); // Check here
>
> Does that mean we’re already protected by lock_sock(sk) or
> bh_lock_sock(sk)?

But the socket lock does not protect the ehash buckets; those are covered
by other locks.

Also, inet_unhash() can be called from other paths, without a socket
lock being held.
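
As a rough sketch (simplified from the established-socket path, not the
exact upstream code), the chain mutation itself happens under the
per-bucket ehash spinlock:

	spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);

	spin_lock_bh(lock);
	/* Safe: the bucket lock serializes all writers on this chain. */
	__sk_nulls_del_node_init_rcu(sk);
	spin_unlock_bh(lock);

Only the earlier sk_unhashed() peek runs before that lock is taken.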

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14  8:09               ` Eric Dumazet
@ 2025-10-14  8:40                 ` luoxuanqiang
  2025-10-14 10:02                   ` Eric Dumazet
  0 siblings, 1 reply; 24+ messages in thread
From: luoxuanqiang @ 2025-10-14  8:40 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/14 16:09, Eric Dumazet wrote:
> On Tue, Oct 14, 2025 at 1:05 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>
>> On 2025/10/14 15:34, Eric Dumazet wrote:
>>> On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>> On 2025/10/13 17:49, Eric Dumazet wrote:
>>>>> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>>>> On 2025/10/13 15:31, Eric Dumazet wrote:
>>>>>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>>>>>>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
>>>>>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>>>>>>>> +                                          struct hlist_nulls_node *new)
>>>>>>>> +{
>>>>>>>> +       struct hlist_nulls_node *next = old->next;
>>>>>>>> +
>>>>>>>> +       WRITE_ONCE(new->next, next);
>>>>>>>> +       WRITE_ONCE(new->pprev, old->pprev);
>>>>>>> I do not think these two WRITE_ONCE() are needed.
>>>>>>>
>>>>>>> At this point new is not yet visible.
>>>>>>>
>>>>>>> The following  rcu_assign_pointer() is enough to make sure prior
>>>>>>> writes are committed to memory.
>>>>>> Dear Eric,
>>>>>>
>>>>>> I’m quoting your more detailed explanation from the other patch [0], thank
>>>>>> you for that!
>>>>>>
>>>>>> However, regarding new->next, if the new object is allocated with
>>>>>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
>>>>>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
>>>>>> rculist_nulls”)?
>>>>>>
>>>>>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
>>>>>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
>>>>>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
>>>>> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
>>>>>
>>>>> It is a bit weird to annotate the writes, but not the lockless reads,
>>>>> even if apparently KCSAN
>>>>> is okay with that.
>>>>>
>>>> Dear Eric,
>>>>
>>>> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
>>>> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
>>>> similar to the race described in commit 860c8802ace1 (“rcu: Use
>>>> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
>>>>
>>> inet_unhash() does a lockless sk_unhash(sk) call, while no lock is
>>> held in some cases (look at tcp_done())
>>>
>>> void inet_unhash(struct sock *sk)
>>> {
>>> struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);
>>>
>>> if (sk_unhashed(sk))    // Here no lock is held
>>>       return;
>>>
>>> Relevant lock (depending on (sk->sk_state == TCP_LISTEN)) is acquired
>>> a few lines later.
>>>
>>> Then
>>>
>>> __sk_nulls_del_node_init_rcu() is called safely, while the bucket lock is held.
>>>
>> Dear Eric,
>>
>> Thanks for the quick response!
>>
>> In the call path:
>>           tcp_retransmit_timer()
>>                   tcp_write_err()
>>                           tcp_done()
>>
>> tcp_retransmit_timer() already calls lockdep_sock_is_held(sk) to check the
>> socket‑lock state.
>>
>> void tcp_retransmit_timer(struct sock *sk)
>> {
>>           struct tcp_sock *tp = tcp_sk(sk);
>>           struct net *net = sock_net(sk);
>>           struct inet_connection_sock *icsk = inet_csk(sk);
>>           struct request_sock *req;
>>           struct sk_buff *skb;
>>
>>           req = rcu_dereference_protected(tp->fastopen_rsk,
>>                                    lockdep_sock_is_held(sk)); // Check here
>>
>> Does that mean we’re already protected by lock_sock(sk) or
>> bh_lock_sock(sk)?
> But the socket lock is not protecting ehash buckets. These are other locks.
>
> Also, inet_unhash() can be called from other paths, without a socket
> lock being held.

Dear Eric,

I understand the distinction now, but looking at the call stack in [0],
both CPUs reach inet_unhash() via the tcp_retransmit_timer() path, so only
one of them should pass the check, right?

I’m still not clear how this race condition arises.

After that issue was encountered, commit 860c8802ace1 switched the
n->pprev assignment in hlist_nulls_add_head_rcu() to WRITE_ONCE().
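
For reference, with that change applied the helper reads roughly as
follows (current mainline, as I understand it):

static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
					    struct hlist_nulls_head *h)
{
	struct hlist_nulls_node *first = h->first;

	WRITE_ONCE(n->next, first);
	WRITE_ONCE(n->pprev, &h->first);	/* the 860c8802ace1 annotation */
	rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
	if (!is_a_nulls(first))
		WRITE_ONCE(first->pprev, &n->next);
}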

Thank you very much for taking the time to discuss this with me!

Thanks!

[0]:
------------------------------------------------------------------------

BUG: KCSAN: data-race in inet_unhash / inet_unhash

write to 0xffff8880a69a0170 of 8 bytes by interrupt on cpu 1:
  __hlist_nulls_del include/linux/list_nulls.h:88 [inline]
  hlist_nulls_del_init_rcu include/linux/rculist_nulls.h:36 [inline]
  __sk_nulls_del_node_init_rcu include/net/sock.h:676 [inline]
  inet_unhash+0x38f/0x4a0 net/ipv4/inet_hashtables.c:612
  tcp_set_state+0xfa/0x3e0 net/ipv4/tcp.c:2249
  tcp_done+0x93/0x1e0 net/ipv4/tcp.c:3854
  tcp_write_err+0x7e/0xc0 net/ipv4/tcp_timer.c:56
  tcp_retransmit_timer+0x9b8/0x16d0 net/ipv4/tcp_timer.c:479	<===
  tcp_write_timer_handler+0x42d/0x510 net/ipv4/tcp_timer.c:599
  tcp_write_timer+0xd1/0xf0 net/ipv4/tcp_timer.c:619
  call_timer_fn+0x5f/0x2f0 kernel/time/timer.c:1404
  expire_timers kernel/time/timer.c:1449 [inline]
  __run_timers kernel/time/timer.c:1773 [inline]
  __run_timers kernel/time/timer.c:1740 [inline]
  run_timer_softirq+0xc0c/0xcd0 kernel/time/timer.c:1786
  __do_softirq+0x115/0x33f kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0xbb/0xe0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0xe6/0x280 arch/x86/kernel/apic/apic.c:1137
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:830
  native_safe_halt+0xe/0x10 arch/x86/kernel/paravirt.c:71
  arch_cpu_idle+0x1f/0x30 arch/x86/kernel/process.c:571
  default_idle_call+0x1e/0x40 kernel/sched/idle.c:94
  cpuidle_idle_call kernel/sched/idle.c:154 [inline]
  do_idle+0x1af/0x280 kernel/sched/idle.c:263
  cpu_startup_entry+0x1b/0x20 kernel/sched/idle.c:355
  start_secondary+0x208/0x260 arch/x86/kernel/smpboot.c:264
  secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:241

read to 0xffff8880a69a0170 of 8 bytes by interrupt on cpu 0:
  sk_unhashed include/net/sock.h:607 [inline]
  inet_unhash+0x3d/0x4a0 net/ipv4/inet_hashtables.c:592
  tcp_set_state+0xfa/0x3e0 net/ipv4/tcp.c:2249
  tcp_done+0x93/0x1e0 net/ipv4/tcp.c:3854
  tcp_write_err+0x7e/0xc0 net/ipv4/tcp_timer.c:56
  tcp_retransmit_timer+0x9b8/0x16d0 net/ipv4/tcp_timer.c:479	<===
  tcp_write_timer_handler+0x42d/0x510 net/ipv4/tcp_timer.c:599
  tcp_write_timer+0xd1/0xf0 net/ipv4/tcp_timer.c:619
  call_timer_fn+0x5f/0x2f0 kernel/time/timer.c:1404
  expire_timers kernel/time/timer.c:1449 [inline]
  __run_timers kernel/time/timer.c:1773 [inline]
  __run_timers kernel/time/timer.c:1740 [inline]
  run_timer_softirq+0xc0c/0xcd0 kernel/time/timer.c:1786
  __do_softirq+0x115/0x33f kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0xbb/0xe0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0xe6/0x280 arch/x86/kernel/apic/apic.c:1137
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:830
  native_safe_halt+0xe/0x10 arch/x86/kernel/paravirt.c:71
  arch_cpu_idle+0x1f/0x30 arch/x86/kernel/process.c:571
  default_idle_call+0x1e/0x40 kernel/sched/idle.c:94
  cpuidle_idle_call kernel/sched/idle.c:154 [inline]
  do_idle+0x1af/0x280 kernel/sched/idle.c:263
  cpu_startup_entry+0x1b/0x20 kernel/sched/idle.c:355
  rest_init+0xec/0xf6 init/main.c:452
  arch_call_rest_init+0x17/0x37
  start_kernel+0x838/0x85e init/main.c:786
  x86_64_start_reservations+0x29/0x2b arch/x86/kernel/head64.c:490
  x86_64_start_kernel+0x72/0x76 arch/x86/kernel/head64.c:471
  secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:241

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.0-rc6+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011

------------------------------------------------------------------------


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14  8:40                 ` luoxuanqiang
@ 2025-10-14 10:02                   ` Eric Dumazet
  2025-10-14 11:40                     ` luoxuanqiang
  0 siblings, 1 reply; 24+ messages in thread
From: Eric Dumazet @ 2025-10-14 10:02 UTC (permalink / raw)
  To: luoxuanqiang
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay

On Tue, Oct 14, 2025 at 1:41 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>
>
> On 2025/10/14 16:09, Eric Dumazet wrote:
> > On Tue, Oct 14, 2025 at 1:05 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>
> >> On 2025/10/14 15:34, Eric Dumazet wrote:
> >>> On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>>> On 2025/10/13 17:49, Eric Dumazet wrote:
> >>>>> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
> >>>>>> On 2025/10/13 15:31, Eric Dumazet wrote:
> >>>>>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
> >>>>>>>>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
> >>>>>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
> >>>>>>>> +                                          struct hlist_nulls_node *new)
> >>>>>>>> +{
> >>>>>>>> +       struct hlist_nulls_node *next = old->next;
> >>>>>>>> +
> >>>>>>>> +       WRITE_ONCE(new->next, next);
> >>>>>>>> +       WRITE_ONCE(new->pprev, old->pprev);
> >>>>>>> I do not think these two WRITE_ONCE() are needed.
> >>>>>>>
> >>>>>>> At this point new is not yet visible.
> >>>>>>>
> >>>>>>> The following  rcu_assign_pointer() is enough to make sure prior
> >>>>>>> writes are committed to memory.
> >>>>>> Dear Eric,
> >>>>>>
> >>>>>> I’m quoting your more detailed explanation from the other patch [0], thank
> >>>>>> you for that!
> >>>>>>
> >>>>>> However, regarding new->next, if the new object is allocated with
> >>>>>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
> >>>>>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
> >>>>>> rculist_nulls”)?
> >>>>>>
> >>>>>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
> >>>>>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
> >>>>>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
> >>>>> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
> >>>>>
> >>>>> It is a bit weird to annotate the writes, but not the lockless reads,
> >>>>> even if apparently KCSAN
> >>>>> is okay with that.
> >>>>>
> >>>> Dear Eric,
> >>>>
> >>>> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
> >>>> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
> >>>> similar to the race described in commit 860c8802ace1 (“rcu: Use
> >>>> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
> >>>>
> >>> inet_unhash() does a lockless sk_unhash(sk) call, while no lock is
> >>> held in some cases (look at tcp_done())
> >>>
> >>> void inet_unhash(struct sock *sk)
> >>> {
> >>> struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);
> >>>
> >>> if (sk_unhashed(sk))    // Here no lock is held
> >>>       return;
> >>>
> >>> Relevant lock (depending on (sk->sk_state == TCP_LISTEN)) is acquired
> >>> a few lines later.
> >>>
> >>> Then
> >>>
> >>> __sk_nulls_del_node_init_rcu() is called safely, while the bucket lock is held.
> >>>
> >> Dear Eric,
> >>
> >> Thanks for the quick response!
> >>
> >> In the call path:
> >>           tcp_retransmit_timer()
> >>                   tcp_write_err()
> >>                           tcp_done()
> >>
> >> tcp_retransmit_timer() already calls lockdep_sock_is_held(sk) to check the
> >> socket‑lock state.
> >>
> >> void tcp_retransmit_timer(struct sock *sk)
> >> {
> >>           struct tcp_sock *tp = tcp_sk(sk);
> >>           struct net *net = sock_net(sk);
> >>           struct inet_connection_sock *icsk = inet_csk(sk);
> >>           struct request_sock *req;
> >>           struct sk_buff *skb;
> >>
> >>           req = rcu_dereference_protected(tp->fastopen_rsk,
> >>                                    lockdep_sock_is_held(sk)); // Check here
> >>
> >> Does that mean we’re already protected by lock_sock(sk) or
> >> bh_lock_sock(sk)?
> > But the socket lock is not protecting ehash buckets. These are other locks.
> >
> > Also, inet_unhash() can be called from other paths, without a socket
> > lock being held.
>
> Dear Eric,
>
> I understand the distinction now, but looking at the call stack in [0],
> both CPUs reach inet_unhash() via the tcp_retransmit_timer() path, so only
> one of them should pass the check, right?
>
> I’m still not clear how this race condition arises.

Because those are two different sockets. This once again shows why holding
the socket lock (or not) is not relevant here.

One of them is changing pointers in the chain, messing with
surrounding pointers.

The second one is reading sk->sk_node.pprev without using
hlist_unhashed_lockless().

I am not sure how else to explain this...

Please look at the difference between hlist_unhashed_lockless() and
hlist_unhashed().
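
For reference, the two helpers differ only in the READ_ONCE() annotation
(quoting include/linux/list.h from memory, so minor details may differ):

static inline int hlist_unhashed(const struct hlist_node *h)
{
	return !h->pprev;
}

/* Version suitable for lockless callers: READ_ONCE() keeps the compiler
 * from tearing or refetching the racy load of ->pprev.
 */
static inline int hlist_unhashed_lockless(const struct hlist_node *h)
{
	return !READ_ONCE(h->pprev);
}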

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu()
  2025-10-14 10:02                   ` Eric Dumazet
@ 2025-10-14 11:40                     ` luoxuanqiang
  0 siblings, 0 replies; 24+ messages in thread
From: luoxuanqiang @ 2025-10-14 11:40 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: kuniyu, Paul E. McKenney, kerneljasonxing, davem, kuba, netdev,
	Xuanqiang Luo, Frederic Weisbecker, Neeraj Upadhyay


On 2025/10/14 18:02, Eric Dumazet wrote:
> On Tue, Oct 14, 2025 at 1:41 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>
>> On 2025/10/14 16:09, Eric Dumazet wrote:
>>> On Tue, Oct 14, 2025 at 1:05 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>> On 2025/10/14 15:34, Eric Dumazet wrote:
>>>>> On Tue, Oct 14, 2025 at 12:21 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>>>> On 2025/10/13 17:49, Eric Dumazet wrote:
>>>>>>> On Mon, Oct 13, 2025 at 1:26 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote:
>>>>>>>> On 2025/10/13 15:31, Eric Dumazet wrote:
>>>>>>>>> On Fri, Sep 26, 2025 at 12:41 AM <xuanqiang.luo@linux.dev> wrote:
>>>>>>>>>> [... changelog, diffstat and earlier hunks of the patch snipped ...]
>>>>>>>>>> +static inline void hlist_nulls_replace_rcu(struct hlist_nulls_node *old,
>>>>>>>>>> +                                          struct hlist_nulls_node *new)
>>>>>>>>>> +{
>>>>>>>>>> +       struct hlist_nulls_node *next = old->next;
>>>>>>>>>> +
>>>>>>>>>> +       WRITE_ONCE(new->next, next);
>>>>>>>>>> +       WRITE_ONCE(new->pprev, old->pprev);
>>>>>>>>> I do not think these two WRITE_ONCE() are needed.
>>>>>>>>>
>>>>>>>>> At this point new is not yet visible.
>>>>>>>>>
>>>>>>>>> The following  rcu_assign_pointer() is enough to make sure prior
>>>>>>>>> writes are committed to memory.
>>>>>>>> Dear Eric,
>>>>>>>>
>>>>>>>> I’m quoting your more detailed explanation from the other patch [0], thank
>>>>>>>> you for that!
>>>>>>>>
>>>>>>>> However, regarding new->next, if the new object is allocated with
>>>>>>>> SLAB_TYPESAFE_BY_RCU, would we still encounter the same issue as in commit
>>>>>>>> efd04f8a8b45 (“rcu: Use WRITE_ONCE() for assignments to ->next for
>>>>>>>> rculist_nulls”)?
>>>>>>>>
>>>>>>>> Also, for the WRITE_ONCE() assignments to ->pprev introduced in commit
>>>>>>>> 860c8802ace1 (“rcu: Use WRITE_ONCE() for assignments to ->pprev for
>>>>>>>> hlist_nulls”) within hlist_nulls_add_head_rcu(), is that also unnecessary?
>>>>>>> I forgot sk_unhashed()/sk_hashed() could be called from lockless contexts.
>>>>>>>
>>>>>>> It is a bit weird to annotate the writes, but not the lockless reads,
>>>>>>> even if apparently KCSAN
>>>>>>> is okay with that.
>>>>>>>
>>>>>> Dear Eric,
>>>>>>
>>>>>> I’m sorry—I still haven’t fully grasped the scenario you mentioned where
>>>>>> sk_unhashed()/sk_hashed() can be called from lock‑less contexts. It seems
>>>>>> similar to the race described in commit 860c8802ace1 (“rcu: Use
>>>>>> WRITE_ONCE() for assignments to ->pprev for hlist_nulls”), e.g.: [0].
>>>>>>
>>>>> inet_unhash() does a lockless sk_unhash(sk) call, while no lock is
>>>>> held in some cases (look at tcp_done())
>>>>>
>>>>> void inet_unhash(struct sock *sk)
>>>>> {
>>>>>         struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);
>>>>>
>>>>>         if (sk_unhashed(sk))    // Here no lock is held
>>>>>                 return;
>>>>>
>>>>> Relevant lock (depending on (sk->sk_state == TCP_LISTEN)) is acquired
>>>>> a few lines later.
>>>>>
>>>>> Then
>>>>>
>>>>> __sk_nulls_del_node_init_rcu() is called safely, while the bucket lock is held.
>>>>>
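
Roughly, the flow described here looks like this (a sketch of recent
kernels; the listener branch and other details are elided):

	void inet_unhash(struct sock *sk)
	{
		struct inet_hashinfo *hashinfo = tcp_get_hashinfo(sk);

		if (sk_unhashed(sk))	/* lockless fast path, no lock held */
			return;

		if (sk->sk_state == TCP_LISTEN) {
			/* take the listen-bucket lock instead */
		} else {
			spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);

			spin_lock_bh(lock);
			/* safe: the ehash bucket lock is held here */
			__sk_nulls_del_node_init_rcu(sk);
			spin_unlock_bh(lock);
		}
	}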
>>>> Dear Eric,
>>>>
>>>> Thanks for the quick response!
>>>>
>>>> In the call path:
>>>>            tcp_retransmit_timer()
>>>>                    tcp_write_err()
>>>>                            tcp_done()
>>>>
>>>> tcp_retransmit_timer() already calls lockdep_sock_is_held(sk) to check the
>>>> socket-lock state.
>>>>
>>>> void tcp_retransmit_timer(struct sock *sk)
>>>> {
>>>>            struct tcp_sock *tp = tcp_sk(sk);
>>>>            struct net *net = sock_net(sk);
>>>>            struct inet_connection_sock *icsk = inet_csk(sk);
>>>>            struct request_sock *req;
>>>>            struct sk_buff *skb;
>>>>
>>>>            req = rcu_dereference_protected(tp->fastopen_rsk,
>>>>                                     lockdep_sock_is_held(sk)); // Check here
>>>>
>>>> Does that mean we’re already protected by lock_sock(sk) or
>>>> bh_lock_sock(sk)?
>>> But the socket lock does not protect the ehash buckets; those are
>>> guarded by other locks.
>>>
>>> Also, inet_unhash() can be called from other paths, without a socket
>>> lock being held.
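
In other words, two independent lock classes are in play; a rough contrast
(same names as in the snippets above):

	/* per-socket: serializes socket state, not the hash chain */
	lock_sock(sk);
	/* ... */
	release_sock(sk);

	/* per-bucket: serializes one ehash chain, as in inet_unhash() */
	spin_lock_bh(inet_ehash_lockp(hashinfo, sk->sk_hash));
	/* ... */
	spin_unlock_bh(inet_ehash_lockp(hashinfo, sk->sk_hash));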
>> Dear Eric,
>>
>> I understand the distinction now, but looking at the call stack in [0],
>> both CPUs reach inet_unhash() via the tcp_retransmit_timer() path, so only
>> one of them should pass the check, right?
>>
>> I’m still not clear how this race condition arises.
> Because those are two different sockets. This once again explains why
> holding the socket lock (or not) is not relevant.
>
> One of them is changing pointers in the chain, messing with
> surrounding pointers.
>
> The second one is reading sk->sk_node.pprev without using
> hlist_unhashed_lockless().
>
> I do not know how to explain this...
>
> Please look at the difference between hlist_unhashed_lockless() and
> hlist_unhashed().
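
For reference, the two helpers differ only in the READ_ONCE() annotation
(as defined in include/linux/list.h):

	static inline int hlist_unhashed(const struct hlist_node *h)
	{
		return !h->pprev;
	}

	/* for lockless use: the marked read pairs with WRITE_ONCE() writers */
	static inline int hlist_unhashed_lockless(const struct hlist_node *h)
	{
		return !READ_ONCE(h->pprev);
	}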

Thank you very much for the patient explanation. :)

I think I understand now!

Best regards!
Xuanqiang.



end of thread

Thread overview: 24+ messages
2025-09-26  7:40 [PATCH net-next v7 0/3] net: Avoid ehash lookup races xuanqiang.luo
2025-09-26  7:40 ` [PATCH net-next v7 1/3] rculist: Add hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
2025-09-27 20:31   ` Kuniyuki Iwashima
2025-09-30  9:16   ` Paolo Abeni
2025-10-01 15:03     ` luoxuanqiang
2025-10-13  5:36     ` Jiayuan Chen
2025-10-13  6:26       ` Jason Xing
2025-10-13  7:04         ` luoxuanqiang
2025-10-13 12:08           ` Simon Horman
2025-10-14  2:29             ` luoxuanqiang
2025-10-01 12:19   ` Frederic Weisbecker
2025-10-13  7:31   ` Eric Dumazet
2025-10-13  8:25     ` luoxuanqiang
2025-10-13  9:49       ` Eric Dumazet
2025-10-14  7:20         ` luoxuanqiang
2025-10-14  7:34           ` Eric Dumazet
2025-10-14  8:04             ` luoxuanqiang
2025-10-14  8:09               ` Eric Dumazet
2025-10-14  8:40                 ` luoxuanqiang
2025-10-14 10:02                   ` Eric Dumazet
2025-10-14 11:40                     ` luoxuanqiang
2025-09-26  7:40 ` [PATCH net-next v7 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
2025-09-26  7:40 ` [PATCH net-next v7 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
2025-09-27  2:56 ` [PATCH net-next v7 0/3] net: Avoid ehash lookup races Jiayuan Chen
