* [PATCH net-next v3 0/3] net: Avoid ehash lookup races
@ 2025-09-16 10:30 xuanqiang.luo
2025-09-16 10:30 ` [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: xuanqiang.luo @ 2025-09-16 10:30 UTC (permalink / raw)
To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo
From: Xuanqiang Luo <luoxuanqiang@kylinos.cn>
After replacing R/W locks with RCU in commit 3ab5aee7fe84 ("net: Convert
TCP & DCCP hash tables to use RCU / hlist_nulls"), a race window emerged
during the switch from reqsk/sk to sk/tw.
Now that both the timewait sock (tw) and the full sock (sk) reside on the
same ehash chain, it is appropriate to introduce hlist_nulls replace
operations to eliminate the race conditions caused by this window.
Before this series, I sent another version that tried to avoid the issue
with a lock mechanism. That approach turned out to have problems, so the
current patches switch to the "replace" method instead.
For details, refer to:
https://lore.kernel.org/netdev/20250903024406.2418362-1-xuanqiang.luo@linux.dev/
I recently ran into this issue myself and found it has been discussed
several times before, so I'm adding the background here for anyone
interested:
1. https://lore.kernel.org/lkml/20230118015941.1313-1-kerneljasonxing@gmail.com/
2. https://lore.kernel.org/netdev/20230606064306.9192-1-duanmuquan@baidu.com/
---
Changes:
v3:
* Add more background information on this type of issue to the cover letter.
v2: https://lore.kernel.org/all/20250916064614.605075-1-xuanqiang.luo@linux.dev/
* Patch 1
* Use WRITE_ONCE() to initialize old->pprev.
* Patch 2&3
* Optimize sk hashed check. Thanks Kuni for pointing it out!
v1: https://lore.kernel.org/all/20250915070308.111816-1-xuanqiang.luo@linux.dev/
Xuanqiang Luo (3):
rculist: Add __hlist_nulls_replace_rcu() and
hlist_nulls_replace_init_rcu()
inet: Avoid ehash lookup race in inet_ehash_insert()
inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule()
include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++
include/net/sock.h | 23 +++++++++++++
net/ipv4/inet_hashtables.c | 4 ++-
net/ipv4/inet_timewait_sock.c | 15 ++++-----
4 files changed, 93 insertions(+), 10 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread* [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-16 10:30 [PATCH net-next v3 0/3] net: Avoid ehash lookup races xuanqiang.luo @ 2025-09-16 10:30 ` xuanqiang.luo 2025-09-16 18:58 ` Kuniyuki Iwashima 2025-09-16 10:30 ` [PATCH net-next v3 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo 2025-09-16 10:30 ` [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo 2 siblings, 1 reply; 15+ messages in thread From: xuanqiang.luo @ 2025-09-16 10:30 UTC (permalink / raw) To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> Add two functions to atomically replace RCU-protected hlist_nulls entries. Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> --- include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ 1 file changed, 61 insertions(+) diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h index 89186c499dd4..8ed604f65a3e 100644 --- a/include/linux/rculist_nulls.h +++ b/include/linux/rculist_nulls.h @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); } +/** + * __hlist_nulls_replace_rcu - replace an old entry by a new one + * @old: the element to be replaced + * @new: the new element to insert + * + * Description: + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while + * permitting racing traversals. + * + * The caller must take whatever precautions are necessary (such as holding + * appropriate locks) to avoid racing with another list-mutation primitive, such + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same + * list. However, it is perfectly legal to run concurrently with the _rcu + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). + */ +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, + struct hlist_nulls_node *new) +{ + struct hlist_nulls_node *next = old->next; + + new->next = next; + WRITE_ONCE(new->pprev, old->pprev); + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new); + if (!is_a_nulls(next)) + WRITE_ONCE(new->next->pprev, &new->next); +} + +/** + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and + * initialize the old + * @old: the element to be replaced + * @new: the new element to insert + * + * Description: + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while + * permitting racing traversals, and reinitialize the old entry. + * + * Return: true if the old entry was hashed and was replaced successfully, false + * otherwise. + * + * Note: hlist_nulls_unhashed() on the old node returns true after this. + * It is useful for RCU based read lockfree traversal if the writer side must + * know if the list entry is still hashed or already unhashed. + * + * The caller must take whatever precautions are necessary (such as holding + * appropriate locks) to avoid racing with another list-mutation primitive, such + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same + * list. However, it is perfectly legal to run concurrently with the _rcu + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). 
+ */ +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old, + struct hlist_nulls_node *new) +{ + if (!hlist_nulls_unhashed(old)) { + __hlist_nulls_replace_rcu(old, new); + WRITE_ONCE(old->pprev, NULL); + return true; + } + return false; +} + /** * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type * @tpos: the type * to use as a loop cursor. -- 2.25.1 ^ permalink raw reply related [flat|nested] 15+ messages in thread
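
To make the intended usage of the two new helpers concrete, here is a minimal,
self-contained sketch of a caller; the demo_* structures, the key field and the
bucket lock below are illustrative stand-ins, not code from the patch:

/* Minimal usage sketch (illustrative only): replace @old with @new on an
 * RCU-protected hlist_nulls chain while lockless readers may be walking it.
 * The spinlock stands in for the per-chain lock the real callers hold.
 */
#include <linux/rculist_nulls.h>
#include <linux/spinlock.h>

struct demo_bucket {
        struct hlist_nulls_head chain;
        spinlock_t              lock;
};

struct demo_entry {
        struct hlist_nulls_node node;
        int                     key;
};

static void demo_replace(struct demo_bucket *b, struct demo_entry *old,
                         struct demo_entry *new)
{
        spin_lock(&b->lock);            /* serializes writers only */
        if (!hlist_nulls_replace_init_rcu(&old->node, &new->node))
                /* @old was not hashed; fall back to a plain insert */
                hlist_nulls_add_head_rcu(&new->node, &b->chain);
        spin_unlock(&b->lock);
        /* @old is unhashed either way; free it only after a grace period */
}

Readers traversing the chain with hlist_nulls_for_each_entry_rcu() see either
@old or @new at that position, never a chain containing neither, which is the
property the following two patches rely on.
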
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-16 10:30 ` [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo @ 2025-09-16 18:58 ` Kuniyuki Iwashima 2025-09-17 3:26 ` luoxuanqiang 0 siblings, 1 reply; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-16 18:58 UTC (permalink / raw) To: xuanqiang.luo Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > > From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > > Add two functions to atomically replace RCU-protected hlist_nulls entries. > > Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > --- > include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ > 1 file changed, 61 insertions(+) > > diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h > index 89186c499dd4..8ed604f65a3e 100644 > --- a/include/linux/rculist_nulls.h > +++ b/include/linux/rculist_nulls.h > @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) > n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); > } > > +/** > + * __hlist_nulls_replace_rcu - replace an old entry by a new one > + * @old: the element to be replaced > + * @new: the new element to insert > + * > + * Description: > + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while > + * permitting racing traversals. > + * > + * The caller must take whatever precautions are necessary (such as holding > + * appropriate locks) to avoid racing with another list-mutation primitive, such > + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same > + * list. However, it is perfectly legal to run concurrently with the _rcu > + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). > + */ > +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, > + struct hlist_nulls_node *new) > +{ > + struct hlist_nulls_node *next = old->next; > + > + new->next = next; > + WRITE_ONCE(new->pprev, old->pprev); As you don't use WRITE_ONCE() for ->next, the new node must not be published yet, so WRITE_ONCE() is unnecessary for ->pprev too. > + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new); > + if (!is_a_nulls(next)) > + WRITE_ONCE(new->next->pprev, &new->next); > +} > + > +/** > + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and > + * initialize the old > + * @old: the element to be replaced > + * @new: the new element to insert > + * > + * Description: > + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while > + * permitting racing traversals, and reinitialize the old entry. > + * > + * Return: true if the old entry was hashed and was replaced successfully, false > + * otherwise. > + * > + * Note: hlist_nulls_unhashed() on the old node returns true after this. > + * It is useful for RCU based read lockfree traversal if the writer side must > + * know if the list entry is still hashed or already unhashed. > + * > + * The caller must take whatever precautions are necessary (such as holding > + * appropriate locks) to avoid racing with another list-mutation primitive, such > + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same > + * list. However, it is perfectly legal to run concurrently with the _rcu > + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). 
> + */ > +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old, > + struct hlist_nulls_node *new) > +{ > + if (!hlist_nulls_unhashed(old)) { As mentioned in v1, this check is redundant. > + __hlist_nulls_replace_rcu(old, new); > + WRITE_ONCE(old->pprev, NULL); > + return true; > + } > + return false; > +} > + > /** > * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type > * @tpos: the type * to use as a loop cursor. > -- > 2.25.1 > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-16 18:58 ` Kuniyuki Iwashima @ 2025-09-17 3:26 ` luoxuanqiang 2025-09-17 4:27 ` Kuniyuki Iwashima 0 siblings, 1 reply; 15+ messages in thread From: luoxuanqiang @ 2025-09-17 3:26 UTC (permalink / raw) To: Kuniyuki Iwashima Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo 在 2025/9/17 02:58, Kuniyuki Iwashima 写道: > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >> >> Add two functions to atomically replace RCU-protected hlist_nulls entries. >> >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >> --- >> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ >> 1 file changed, 61 insertions(+) >> >> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h >> index 89186c499dd4..8ed604f65a3e 100644 >> --- a/include/linux/rculist_nulls.h >> +++ b/include/linux/rculist_nulls.h >> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) >> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); >> } >> >> +/** >> + * __hlist_nulls_replace_rcu - replace an old entry by a new one >> + * @old: the element to be replaced >> + * @new: the new element to insert >> + * >> + * Description: >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while >> + * permitting racing traversals. >> + * >> + * The caller must take whatever precautions are necessary (such as holding >> + * appropriate locks) to avoid racing with another list-mutation primitive, such >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same >> + * list. However, it is perfectly legal to run concurrently with the _rcu >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). >> + */ >> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, >> + struct hlist_nulls_node *new) >> +{ >> + struct hlist_nulls_node *next = old->next; >> + >> + new->next = next; Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45 ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")? I am more inclined to think that it is necessary. >> + WRITE_ONCE(new->pprev, old->pprev); > As you don't use WRITE_ONCE() for ->next, the new node must > not be published yet, so WRITE_ONCE() is unnecessary for ->pprev > too. I noticed that point. My understanding is that using WRITE_ONCE() for new->pprev follows the approach in hlist_replace_rcu() to match the READ_ONCE() in hlist_nulls_unhashed_lockless() and hlist_unhashed_lockless(). > >> + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new); >> + if (!is_a_nulls(next)) >> + WRITE_ONCE(new->next->pprev, &new->next); >> +} >> + >> +/** >> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and >> + * initialize the old >> + * @old: the element to be replaced >> + * @new: the new element to insert >> + * >> + * Description: >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while >> + * permitting racing traversals, and reinitialize the old entry. >> + * >> + * Return: true if the old entry was hashed and was replaced successfully, false >> + * otherwise. >> + * >> + * Note: hlist_nulls_unhashed() on the old node returns true after this. 
>> + * It is useful for RCU based read lockfree traversal if the writer side must >> + * know if the list entry is still hashed or already unhashed. >> + * >> + * The caller must take whatever precautions are necessary (such as holding >> + * appropriate locks) to avoid racing with another list-mutation primitive, such >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same >> + * list. However, it is perfectly legal to run concurrently with the _rcu >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). >> + */ >> +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old, >> + struct hlist_nulls_node *new) >> +{ >> + if (!hlist_nulls_unhashed(old)) { > As mentioned in v1, this check is redundant. Apologies for bringing this up again. My understanding is that replacing a node requires checking if the old node is unhashed. If so, we need a return value to inform the caller that the replace operation would fail. > >> + __hlist_nulls_replace_rcu(old, new); >> + WRITE_ONCE(old->pprev, NULL); >> + return true; >> + } >> + return false; >> +} >> + >> /** >> * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type >> * @tpos: the type * to use as a loop cursor. >> -- >> 2.25.1 >> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-17 3:26 ` luoxuanqiang @ 2025-09-17 4:27 ` Kuniyuki Iwashima 2025-09-17 4:43 ` Kuniyuki Iwashima 2025-09-18 6:09 ` luoxuanqiang 0 siblings, 2 replies; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-17 4:27 UTC (permalink / raw) To: luoxuanqiang Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: > > > 在 2025/9/17 02:58, Kuniyuki Iwashima 写道: > > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >> > >> Add two functions to atomically replace RCU-protected hlist_nulls entries. > >> > >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >> --- > >> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ > >> 1 file changed, 61 insertions(+) > >> > >> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h > >> index 89186c499dd4..8ed604f65a3e 100644 > >> --- a/include/linux/rculist_nulls.h > >> +++ b/include/linux/rculist_nulls.h > >> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) > >> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); > >> } > >> > >> +/** > >> + * __hlist_nulls_replace_rcu - replace an old entry by a new one > >> + * @old: the element to be replaced > >> + * @new: the new element to insert > >> + * > >> + * Description: > >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while > >> + * permitting racing traversals. > >> + * > >> + * The caller must take whatever precautions are necessary (such as holding > >> + * appropriate locks) to avoid racing with another list-mutation primitive, such > >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same > >> + * list. However, it is perfectly legal to run concurrently with the _rcu > >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). > >> + */ > >> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, > >> + struct hlist_nulls_node *new) > >> +{ > >> + struct hlist_nulls_node *next = old->next; > >> + > >> + new->next = next; > > Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45 > ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")? > I am more inclined to think that it is necessary. Good point, then WRITE_ONCE() makes sense. > > >> + WRITE_ONCE(new->pprev, old->pprev); > > As you don't use WRITE_ONCE() for ->next, the new node must > > not be published yet, so WRITE_ONCE() is unnecessary for ->pprev > > too. > > I noticed that point. My understanding is that using WRITE_ONCE() > for new->pprev follows the approach in hlist_replace_rcu() to > match the READ_ONCE() in hlist_nulls_unhashed_lockless() and > hlist_unhashed_lockless(). Using WRITE_ONCE() or READ_ONCE() implies lockless readers or writers elsewhere. sk_hashed() does not use the lockless version, and I think it's always called under lock_sock() or bh_. Perhaps run kernel w/ KCSAN and see if it complains. [ It seems hlist_nulls_unhashed_lockless is not used at all and hlist_unhashed_lockless() is only used by bpf and timer code. ] That said, it might be fair to use WRITE_ONCE() here to make future users less error-prone. 
> > > > >> + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new); > >> + if (!is_a_nulls(next)) > >> + WRITE_ONCE(new->next->pprev, &new->next); > >> +} > >> + > >> +/** > >> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and > >> + * initialize the old > >> + * @old: the element to be replaced > >> + * @new: the new element to insert > >> + * > >> + * Description: > >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while > >> + * permitting racing traversals, and reinitialize the old entry. > >> + * > >> + * Return: true if the old entry was hashed and was replaced successfully, false > >> + * otherwise. > >> + * > >> + * Note: hlist_nulls_unhashed() on the old node returns true after this. > >> + * It is useful for RCU based read lockfree traversal if the writer side must > >> + * know if the list entry is still hashed or already unhashed. > >> + * > >> + * The caller must take whatever precautions are necessary (such as holding > >> + * appropriate locks) to avoid racing with another list-mutation primitive, such > >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same > >> + * list. However, it is perfectly legal to run concurrently with the _rcu > >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). > >> + */ > >> +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old, > >> + struct hlist_nulls_node *new) > >> +{ > >> + if (!hlist_nulls_unhashed(old)) { > > As mentioned in v1, this check is redundant. > > Apologies for bringing this up again. My understanding is that > replacing a node requires checking if the old node is unhashed. Only if the caller does not check it. __sk_nulls_replace_node_init_rcu() has already checked sk_hashed(old), which is !hlist_nulls_unhashed(old), no ? __sk_nulls_replace_node_init_rcu(struct sock *old, ...) if (sk_hashed(old)) hlist_nulls_replace_init_rcu(&old->sk_nulls_node, ...) if (!hlist_nulls_unhashed(old)) > > If so, we need a return value to inform the caller that the > replace operation would fail. > > > > >> + __hlist_nulls_replace_rcu(old, new); > >> + WRITE_ONCE(old->pprev, NULL); > >> + return true; > >> + } > >> + return false; > >> +} > >> + > >> /** > >> * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type > >> * @tpos: the type * to use as a loop cursor. > >> -- > >> 2.25.1 > >> ^ permalink raw reply [flat|nested] 15+ messages in thread
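
For readers following the WRITE_ONCE()/READ_ONCE() discussion, the lockless
accessor that a marked store on ->pprev would pair with has roughly this shape
in the nulls-list headers (paraphrased here, and currently unused in-tree as
noted above):

/* Paraphrased sketch of the existing lockless accessor: a marked store on
 * ->pprev in the writer pairs with this READ_ONCE() in any future lockless
 * reader, which is the "less error-prone for future users" argument.
 */
static inline int hlist_nulls_unhashed_lockless(const struct hlist_nulls_node *h)
{
        return !READ_ONCE(h->pprev);
}
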
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-17 4:27 ` Kuniyuki Iwashima @ 2025-09-17 4:43 ` Kuniyuki Iwashima 2025-09-18 6:09 ` luoxuanqiang 2025-09-18 6:09 ` luoxuanqiang 1 sibling, 1 reply; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-17 4:43 UTC (permalink / raw) To: luoxuanqiang Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Tue, Sep 16, 2025 at 9:27 PM Kuniyuki Iwashima <kuniyu@google.com> wrote: > > On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: > > > > > > 在 2025/9/17 02:58, Kuniyuki Iwashima 写道: > > > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > > >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > > >> > > >> Add two functions to atomically replace RCU-protected hlist_nulls entries. > > >> > > >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > > >> --- > > >> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ > > >> 1 file changed, 61 insertions(+) > > >> > > >> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h > > >> index 89186c499dd4..8ed604f65a3e 100644 > > >> --- a/include/linux/rculist_nulls.h > > >> +++ b/include/linux/rculist_nulls.h > > >> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) > > >> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); > > >> } > > >> > > >> +/** > > >> + * __hlist_nulls_replace_rcu - replace an old entry by a new one > > >> + * @old: the element to be replaced > > >> + * @new: the new element to insert > > >> + * > > >> + * Description: > > >> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while > > >> + * permitting racing traversals. > > >> + * > > >> + * The caller must take whatever precautions are necessary (such as holding > > >> + * appropriate locks) to avoid racing with another list-mutation primitive, such > > >> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same > > >> + * list. However, it is perfectly legal to run concurrently with the _rcu > > >> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). > > >> + */ > > >> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, > > >> + struct hlist_nulls_node *new) > > >> +{ > > >> + struct hlist_nulls_node *next = old->next; > > >> + > > >> + new->next = next; > > > > Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45 > > ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")? > > I am more inclined to think that it is necessary. > > Good point, then WRITE_ONCE() makes sense. and it would be nice to have such reasoning in the commit message as we were able to learn from efd04f8a8b45. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-17 4:43 ` Kuniyuki Iwashima @ 2025-09-18 6:09 ` luoxuanqiang 0 siblings, 0 replies; 15+ messages in thread From: luoxuanqiang @ 2025-09-18 6:09 UTC (permalink / raw) To: Kuniyuki Iwashima Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo 在 2025/9/17 12:43, Kuniyuki Iwashima 写道: > On Tue, Sep 16, 2025 at 9:27 PM Kuniyuki Iwashima <kuniyu@google.com> wrote: >> On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: >>> >>> 在 2025/9/17 02:58, Kuniyuki Iwashima 写道: >>>> On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: >>>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>>> >>>>> Add two functions to atomically replace RCU-protected hlist_nulls entries. >>>>> >>>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>>> --- >>>>> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ >>>>> 1 file changed, 61 insertions(+) >>>>> >>>>> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h >>>>> index 89186c499dd4..8ed604f65a3e 100644 >>>>> --- a/include/linux/rculist_nulls.h >>>>> +++ b/include/linux/rculist_nulls.h >>>>> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) >>>>> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); >>>>> } >>>>> >>>>> +/** >>>>> + * __hlist_nulls_replace_rcu - replace an old entry by a new one >>>>> + * @old: the element to be replaced >>>>> + * @new: the new element to insert >>>>> + * >>>>> + * Description: >>>>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while >>>>> + * permitting racing traversals. >>>>> + * >>>>> + * The caller must take whatever precautions are necessary (such as holding >>>>> + * appropriate locks) to avoid racing with another list-mutation primitive, such >>>>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same >>>>> + * list. However, it is perfectly legal to run concurrently with the _rcu >>>>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). >>>>> + */ >>>>> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, >>>>> + struct hlist_nulls_node *new) >>>>> +{ >>>>> + struct hlist_nulls_node *next = old->next; >>>>> + >>>>> + new->next = next; >>> Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45 >>> ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")? >>> I am more inclined to think that it is necessary. >> Good point, then WRITE_ONCE() makes sense. > and it would be nice to have such reasoning in the commit > message as we were able to learn from efd04f8a8b45. Okay, I'll add an explanation in the next version. Really appreciate you confirming this! ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() 2025-09-17 4:27 ` Kuniyuki Iwashima 2025-09-17 4:43 ` Kuniyuki Iwashima @ 2025-09-18 6:09 ` luoxuanqiang 1 sibling, 0 replies; 15+ messages in thread From: luoxuanqiang @ 2025-09-18 6:09 UTC (permalink / raw) To: Kuniyuki Iwashima Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo 在 2025/9/17 12:27, Kuniyuki Iwashima 写道: > On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: >> >> 在 2025/9/17 02:58, Kuniyuki Iwashima 写道: >>> On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: >>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>> >>>> Add two functions to atomically replace RCU-protected hlist_nulls entries. >>>> >>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>> --- >>>> include/linux/rculist_nulls.h | 61 +++++++++++++++++++++++++++++++++++ >>>> 1 file changed, 61 insertions(+) >>>> >>>> diff --git a/include/linux/rculist_nulls.h b/include/linux/rculist_nulls.h >>>> index 89186c499dd4..8ed604f65a3e 100644 >>>> --- a/include/linux/rculist_nulls.h >>>> +++ b/include/linux/rculist_nulls.h >>>> @@ -152,6 +152,67 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n) >>>> n->next = (struct hlist_nulls_node *)NULLS_MARKER(NULL); >>>> } >>>> >>>> +/** >>>> + * __hlist_nulls_replace_rcu - replace an old entry by a new one >>>> + * @old: the element to be replaced >>>> + * @new: the new element to insert >>>> + * >>>> + * Description: >>>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while >>>> + * permitting racing traversals. >>>> + * >>>> + * The caller must take whatever precautions are necessary (such as holding >>>> + * appropriate locks) to avoid racing with another list-mutation primitive, such >>>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same >>>> + * list. However, it is perfectly legal to run concurrently with the _rcu >>>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). >>>> + */ >>>> +static inline void __hlist_nulls_replace_rcu(struct hlist_nulls_node *old, >>>> + struct hlist_nulls_node *new) >>>> +{ >>>> + struct hlist_nulls_node *next = old->next; >>>> + >>>> + new->next = next; >> Do we need to use WRITE_ONCE() here, as mentioned in efd04f8a8b45 >> ("rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls")? >> I am more inclined to think that it is necessary. > Good point, then WRITE_ONCE() makes sense. > >>>> + WRITE_ONCE(new->pprev, old->pprev); >>> As you don't use WRITE_ONCE() for ->next, the new node must >>> not be published yet, so WRITE_ONCE() is unnecessary for ->pprev >>> too. >> I noticed that point. My understanding is that using WRITE_ONCE() >> for new->pprev follows the approach in hlist_replace_rcu() to >> match the READ_ONCE() in hlist_nulls_unhashed_lockless() and >> hlist_unhashed_lockless(). > Using WRITE_ONCE() or READ_ONCE() implies lockless readers > or writers elsewhere. > > sk_hashed() does not use the lockless version, and I think it's > always called under lock_sock() or bh_. Perhaps run kernel > w/ KCSAN and see if it complains. > > [ It seems hlist_nulls_unhashed_lockless is not used at all and > hlist_unhashed_lockless() is only used by bpf and timer code. ] > > That said, it might be fair to use WRITE_ONCE() here to make > future users less error-prone. 
> > >>>> + rcu_assign_pointer(*(struct hlist_nulls_node __rcu **)new->pprev, new); >>>> + if (!is_a_nulls(next)) >>>> + WRITE_ONCE(new->next->pprev, &new->next); >>>> +} >>>> + >>>> +/** >>>> + * hlist_nulls_replace_init_rcu - replace an old entry by a new one and >>>> + * initialize the old >>>> + * @old: the element to be replaced >>>> + * @new: the new element to insert >>>> + * >>>> + * Description: >>>> + * Replace the old entry with the new one in a RCU-protected hlist_nulls, while >>>> + * permitting racing traversals, and reinitialize the old entry. >>>> + * >>>> + * Return: true if the old entry was hashed and was replaced successfully, false >>>> + * otherwise. >>>> + * >>>> + * Note: hlist_nulls_unhashed() on the old node returns true after this. >>>> + * It is useful for RCU based read lockfree traversal if the writer side must >>>> + * know if the list entry is still hashed or already unhashed. >>>> + * >>>> + * The caller must take whatever precautions are necessary (such as holding >>>> + * appropriate locks) to avoid racing with another list-mutation primitive, such >>>> + * as hlist_nulls_add_head_rcu() or hlist_nulls_del_rcu(), running on this same >>>> + * list. However, it is perfectly legal to run concurrently with the _rcu >>>> + * list-traversal primitives, such as hlist_nulls_for_each_entry_rcu(). >>>> + */ >>>> +static inline bool hlist_nulls_replace_init_rcu(struct hlist_nulls_node *old, >>>> + struct hlist_nulls_node *new) >>>> +{ >>>> + if (!hlist_nulls_unhashed(old)) { >>> As mentioned in v1, this check is redundant. >> Apologies for bringing this up again. My understanding is that >> replacing a node requires checking if the old node is unhashed. > Only if the caller does not check it. > > __sk_nulls_replace_node_init_rcu() has already checked > sk_hashed(old), which is !hlist_nulls_unhashed(old), no ? > > __sk_nulls_replace_node_init_rcu(struct sock *old, ...) > if (sk_hashed(old)) > hlist_nulls_replace_init_rcu(&old->sk_nulls_node, ...) > if (!hlist_nulls_unhashed(old)) > I understand that sk_hashed(old) is equivalent to !hlist_nulls_unhashed(old). However, hlist_nulls_replace_init_rcu() is also used in inet_twsk_hashdance_schedule(). If it's confirmed that the unhashed check is unnecessary in inet_twsk_hashdance_schedule() (as discussed in https://lore.kernel.org/all/CAAVpQUBY=h3gDfaX=J9vbSuhYTn8cfCsBGhPLqoer0OSYdihDg@mail.gmail.com/), then for this specific patchset, this redundant check can indeed be removed. But I'm concerned that others might later use hlist_nulls_replace_init_rcu() standalone, similar to how hlist_nulls_del_init_rcu() is used. This could cause confusion since replace might not always succeed. Given this, might retaining the hlist_nulls_unhashed(old) check be safer? Really appreciate your patient review and suggestions! Thanks Xuanqiang. >> If so, we need a return value to inform the caller that the >> replace operation would fail. >> >>>> + __hlist_nulls_replace_rcu(old, new); >>>> + WRITE_ONCE(old->pprev, NULL); >>>> + return true; >>>> + } >>>> + return false; >>>> +} >>>> + >>>> /** >>>> * hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type >>>> * @tpos: the type * to use as a loop cursor. >>>> -- >>>> 2.25.1 >>>> ^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH net-next v3 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() 2025-09-16 10:30 [PATCH net-next v3 0/3] net: Avoid ehash lookup races xuanqiang.luo 2025-09-16 10:30 ` [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo @ 2025-09-16 10:30 ` xuanqiang.luo 2025-09-16 10:30 ` [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo 2 siblings, 0 replies; 15+ messages in thread From: xuanqiang.luo @ 2025-09-16 10:30 UTC (permalink / raw) To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> Since ehash lookups are lockless, if one CPU performs a lookup while another concurrently deletes and inserts (removing reqsk and inserting sk), the lookup may fail to find the socket, an RST may be sent. The call trace map is drawn as follows: CPU 0 CPU 1 ----- ----- inet_ehash_insert() spin_lock() sk_nulls_del_node_init_rcu(osk) __inet_lookup_established() (lookup failed) __sk_nulls_add_node_rcu(sk, list) spin_unlock() As both deletion and insertion operate on the same ehash chain, this patch introduces two new sk_nulls_replace_* helper functions to implement atomic replacement. Fixes: 5e0724d027f0 ("tcp/dccp: fix hashdance race for passive sessions") Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> --- include/net/sock.h | 23 +++++++++++++++++++++++ net/ipv4/inet_hashtables.c | 4 +++- 2 files changed, 26 insertions(+), 1 deletion(-) diff --git a/include/net/sock.h b/include/net/sock.h index 0fd465935334..e709376eaf0a 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -854,6 +854,29 @@ static inline bool sk_nulls_del_node_init_rcu(struct sock *sk) return rc; } +static inline bool __sk_nulls_replace_node_init_rcu(struct sock *old, + struct sock *new) +{ + if (sk_hashed(old)) { + hlist_nulls_replace_init_rcu(&old->sk_nulls_node, + &new->sk_nulls_node); + return true; + } + return false; +} + +static inline bool sk_nulls_replace_node_init_rcu(struct sock *old, + struct sock *new) +{ + bool rc = __sk_nulls_replace_node_init_rcu(old, new); + + if (rc) { + WARN_ON(refcount_read(&old->sk_refcnt) == 1); + __sock_put(old); + } + return rc; +} + static inline void __sk_add_node(struct sock *sk, struct hlist_head *list) { hlist_add_head(&sk->sk_node, list); diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index ef4ccfd46ff6..83c9ec625419 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -685,7 +685,8 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk) spin_lock(lock); if (osk) { WARN_ON_ONCE(sk->sk_hash != osk->sk_hash); - ret = sk_nulls_del_node_init_rcu(osk); + ret = sk_nulls_replace_node_init_rcu(osk, sk); + goto unlock; } else if (found_dup_sk) { *found_dup_sk = inet_ehash_lookup_by_sk(sk, list); if (*found_dup_sk) @@ -695,6 +696,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk) if (ret) __sk_nulls_add_node_rcu(sk, list); +unlock: spin_unlock(lock); return ret; -- 2.25.1 ^ permalink raw reply related [flat|nested] 15+ messages in thread
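
For context on the reader side this patch protects, the lockless ehash walk has
roughly the following shape; the sketch below is a simplified stand-in with
illustrative demo_* names, not __inet_lookup_established() itself:

#include <linux/rculist_nulls.h>
#include <linux/rcupdate.h>

struct demo_bucket {
        struct hlist_nulls_head chain;
        unsigned long           slot;   /* value given to INIT_HLIST_NULLS_HEAD() */
};

struct demo_entry {
        struct hlist_nulls_node node;
        int                     key;
};

static struct demo_entry *demo_lookup(struct demo_bucket *b, int key)
{
        struct hlist_nulls_node *pos;
        struct demo_entry *e;

        rcu_read_lock();
begin:
        hlist_nulls_for_each_entry_rcu(e, pos, &b->chain, node) {
                if (e->key == key) {
                        rcu_read_unlock();
                        return e;       /* refcount handling elided */
                }
        }
        /*
         * The nulls value tells us which chain the walk ended on; if a
         * concurrent writer moved us onto another chain, we restart.  A
         * delete + add on the *same* chain gives no such signal: the entry
         * is simply invisible for a moment, which is the window the atomic
         * replace closes.
         */
        if (get_nulls_value(pos) != b->slot)
                goto begin;
        rcu_read_unlock();
        return NULL;
}

This is why the fix has to be on the writer side: the reader cannot distinguish
"momentarily removed" from "not present".
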
* [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-16 10:30 [PATCH net-next v3 0/3] net: Avoid ehash lookup races xuanqiang.luo 2025-09-16 10:30 ` [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo 2025-09-16 10:30 ` [PATCH net-next v3 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo @ 2025-09-16 10:30 ` xuanqiang.luo 2025-09-16 19:48 ` Kuniyuki Iwashima 2 siblings, 1 reply; 15+ messages in thread From: xuanqiang.luo @ 2025-09-16 10:30 UTC (permalink / raw) To: edumazet, kuniyu; +Cc: kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> Since ehash lookups are lockless, if another CPU is converting sk to tw concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause lookup failure. The call trace map is drawn as follows: CPU 0 CPU 1 ----- ----- inet_twsk_hashdance_schedule() spin_lock() inet_twsk_add_node_rcu(tw, ...) __inet_lookup_established() (find tw, failure due to tw_refcnt = 0) __sk_nulls_del_node_init_rcu(sk) refcount_set(&tw->tw_refcnt, 3) spin_unlock() By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after setting tw_refcnt, we ensure that tw is either fully initialized or not visible to other CPUs, eliminating the race. Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> --- net/ipv4/inet_timewait_sock.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c index 5b5426b8ee92..1ba20c4cb73b 100644 --- a/net/ipv4/inet_timewait_sock.c +++ b/net/ipv4/inet_timewait_sock.c @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); struct inet_bind_hashbucket *bhead, *bhead2; - /* Step 1: Put TW into bind hash. Original socket stays there too. + /* Put TW into bind hash. Original socket stays there too. Note, that any socket with inet->num != 0 MUST be bound in binding cache, even if it is closed. */ @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, spin_lock(lock); - /* Step 2: Hash TW into tcp ehash chain */ - inet_twsk_add_node_rcu(tw, &ehead->chain); - - /* Step 3: Remove SK from hash chain */ - if (__sk_nulls_del_node_init_rcu(sk)) - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); - - /* Ensure above writes are committed into memory before updating the * refcount. * Provides ordering vs later refcount_inc(). @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, */ refcount_set(&tw->tw_refcnt, 3); + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); + else + inet_twsk_add_node_rcu(tw, &ehead->chain); + inet_twsk_schedule(tw, timeo); spin_unlock(lock); -- 2.25.1 ^ permalink raw reply related [flat|nested] 15+ messages in thread
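
The ordering argument of this patch, reduced to a self-contained sketch with
illustrative demo_* names (the real code additionally relies on the existing
refcount comment block in inet_twsk_hashdance_schedule(); the release semantics
of rcu_assign_pointer() inside the replace helper do the publication):

#include <linux/rculist_nulls.h>
#include <linux/refcount.h>
#include <linux/spinlock.h>

struct demo_entry {
        struct hlist_nulls_node node;
        refcount_t              ref;
};

/* Writer (illustrative): make the new entry fully live *before* it becomes
 * reachable.  rcu_assign_pointer() inside the replace helper orders the
 * refcount_set() before the node pointer is published.
 */
static void demo_switch(spinlock_t *lock, struct hlist_nulls_head *chain,
                        struct demo_entry *old, struct demo_entry *new)
{
        refcount_set(&new->ref, 3);     /* initialize first ...      */
        spin_lock(lock);
        if (!hlist_nulls_replace_init_rcu(&old->node, &new->node))
                hlist_nulls_add_head_rcu(&new->node, chain);    /* ... then publish */
        spin_unlock(lock);
}

/* Reader (illustrative): a freshly published entry can no longer be seen
 * with a zero refcount, so this cannot spuriously fail for it.
 */
static bool demo_grab(struct demo_entry *e)
{
        return refcount_inc_not_zero(&e->ref);
}

This mirrors the patch: refcount_set(&tw->tw_refcnt, 3) now happens before the
tw node is hashed, and the hashing itself is a single atomic replacement of sk
rather than an add followed by a delete.
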
* Re: [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-16 10:30 ` [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo @ 2025-09-16 19:48 ` Kuniyuki Iwashima 2025-09-17 3:26 ` luoxuanqiang 0 siblings, 1 reply; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-16 19:48 UTC (permalink / raw) To: xuanqiang.luo Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > > From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > > Since ehash lookups are lockless, if another CPU is converting sk to tw > concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause > lookup failure. > > The call trace map is drawn as follows: > CPU 0 CPU 1 > ----- ----- > inet_twsk_hashdance_schedule() > spin_lock() > inet_twsk_add_node_rcu(tw, ...) > __inet_lookup_established() > (find tw, failure due to tw_refcnt = 0) > __sk_nulls_del_node_init_rcu(sk) > refcount_set(&tw->tw_refcnt, 3) > spin_unlock() > > By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after > setting tw_refcnt, we ensure that tw is either fully initialized or not > visible to other CPUs, eliminating the race. > > Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") > Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > --- > net/ipv4/inet_timewait_sock.c | 15 ++++++--------- > 1 file changed, 6 insertions(+), 9 deletions(-) > > diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c > index 5b5426b8ee92..1ba20c4cb73b 100644 > --- a/net/ipv4/inet_timewait_sock.c > +++ b/net/ipv4/inet_timewait_sock.c > @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); > struct inet_bind_hashbucket *bhead, *bhead2; > > - /* Step 1: Put TW into bind hash. Original socket stays there too. > + /* Put TW into bind hash. Original socket stays there too. > Note, that any socket with inet->num != 0 MUST be bound in > binding cache, even if it is closed. > */ > @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > > spin_lock(lock); > > - /* Step 2: Hash TW into tcp ehash chain */ > - inet_twsk_add_node_rcu(tw, &ehead->chain); > - > - /* Step 3: Remove SK from hash chain */ > - if (__sk_nulls_del_node_init_rcu(sk)) > - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > - > - > /* Ensure above writes are committed into memory before updating the > * refcount. > * Provides ordering vs later refcount_inc(). > @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > */ > refcount_set(&tw->tw_refcnt, 3); > > + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) > + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > + else > + inet_twsk_add_node_rcu(tw, &ehead->chain); When hlist_nulls_replace_init_rcu() returns false ? > + > inet_twsk_schedule(tw, timeo); > > spin_unlock(lock); > -- > 2.25.1 > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-16 19:48 ` Kuniyuki Iwashima @ 2025-09-17 3:26 ` luoxuanqiang 2025-09-17 4:36 ` Kuniyuki Iwashima 0 siblings, 1 reply; 15+ messages in thread From: luoxuanqiang @ 2025-09-17 3:26 UTC (permalink / raw) To: Kuniyuki Iwashima Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo 在 2025/9/17 03:48, Kuniyuki Iwashima 写道: > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >> >> Since ehash lookups are lockless, if another CPU is converting sk to tw >> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause >> lookup failure. >> >> The call trace map is drawn as follows: >> CPU 0 CPU 1 >> ----- ----- >> inet_twsk_hashdance_schedule() >> spin_lock() >> inet_twsk_add_node_rcu(tw, ...) >> __inet_lookup_established() >> (find tw, failure due to tw_refcnt = 0) >> __sk_nulls_del_node_init_rcu(sk) >> refcount_set(&tw->tw_refcnt, 3) >> spin_unlock() >> >> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after >> setting tw_refcnt, we ensure that tw is either fully initialized or not >> visible to other CPUs, eliminating the race. >> >> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >> --- >> net/ipv4/inet_timewait_sock.c | 15 ++++++--------- >> 1 file changed, 6 insertions(+), 9 deletions(-) >> >> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c >> index 5b5426b8ee92..1ba20c4cb73b 100644 >> --- a/net/ipv4/inet_timewait_sock.c >> +++ b/net/ipv4/inet_timewait_sock.c >> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >> spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); >> struct inet_bind_hashbucket *bhead, *bhead2; >> >> - /* Step 1: Put TW into bind hash. Original socket stays there too. >> + /* Put TW into bind hash. Original socket stays there too. >> Note, that any socket with inet->num != 0 MUST be bound in >> binding cache, even if it is closed. >> */ >> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >> >> spin_lock(lock); >> >> - /* Step 2: Hash TW into tcp ehash chain */ >> - inet_twsk_add_node_rcu(tw, &ehead->chain); >> - >> - /* Step 3: Remove SK from hash chain */ >> - if (__sk_nulls_del_node_init_rcu(sk)) >> - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); >> - >> - >> /* Ensure above writes are committed into memory before updating the >> * refcount. >> * Provides ordering vs later refcount_inc(). >> @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >> */ >> refcount_set(&tw->tw_refcnt, 3); >> >> + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) >> + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); >> + else >> + inet_twsk_add_node_rcu(tw, &ehead->chain); > When hlist_nulls_replace_init_rcu() returns false ? When hlist_nulls_replace_init_rcu() returns false, it means sk is unhashed, the replacement operation failed, we need to insert tw, and this doesn't change the original logic. > >> + >> inet_twsk_schedule(tw, timeo); >> >> spin_unlock(lock); >> -- >> 2.25.1 >> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-17 3:26 ` luoxuanqiang @ 2025-09-17 4:36 ` Kuniyuki Iwashima 2025-09-18 8:32 ` luoxuanqiang 0 siblings, 1 reply; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-17 4:36 UTC (permalink / raw) To: luoxuanqiang Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: > > > 在 2025/9/17 03:48, Kuniyuki Iwashima 写道: > > On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > >> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >> > >> Since ehash lookups are lockless, if another CPU is converting sk to tw > >> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause > >> lookup failure. > >> > >> The call trace map is drawn as follows: > >> CPU 0 CPU 1 > >> ----- ----- > >> inet_twsk_hashdance_schedule() > >> spin_lock() > >> inet_twsk_add_node_rcu(tw, ...) > >> __inet_lookup_established() > >> (find tw, failure due to tw_refcnt = 0) > >> __sk_nulls_del_node_init_rcu(sk) > >> refcount_set(&tw->tw_refcnt, 3) > >> spin_unlock() > >> > >> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after > >> setting tw_refcnt, we ensure that tw is either fully initialized or not > >> visible to other CPUs, eliminating the race. > >> > >> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") > >> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >> --- > >> net/ipv4/inet_timewait_sock.c | 15 ++++++--------- > >> 1 file changed, 6 insertions(+), 9 deletions(-) > >> > >> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c > >> index 5b5426b8ee92..1ba20c4cb73b 100644 > >> --- a/net/ipv4/inet_timewait_sock.c > >> +++ b/net/ipv4/inet_timewait_sock.c > >> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >> spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); > >> struct inet_bind_hashbucket *bhead, *bhead2; > >> > >> - /* Step 1: Put TW into bind hash. Original socket stays there too. > >> + /* Put TW into bind hash. Original socket stays there too. > >> Note, that any socket with inet->num != 0 MUST be bound in > >> binding cache, even if it is closed. > >> */ > >> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >> > >> spin_lock(lock); > >> > >> - /* Step 2: Hash TW into tcp ehash chain */ > >> - inet_twsk_add_node_rcu(tw, &ehead->chain); > >> - > >> - /* Step 3: Remove SK from hash chain */ > >> - if (__sk_nulls_del_node_init_rcu(sk)) > >> - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > >> - > >> - > >> /* Ensure above writes are committed into memory before updating the > >> * refcount. > >> * Provides ordering vs later refcount_inc(). > >> @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >> */ > >> refcount_set(&tw->tw_refcnt, 3); > >> > >> + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) > >> + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > >> + else > >> + inet_twsk_add_node_rcu(tw, &ehead->chain); > > When hlist_nulls_replace_init_rcu() returns false ? > > When hlist_nulls_replace_init_rcu() returns false, it means > sk is unhashed, and how does this happen ? 
Here is under lock_sock() I think, for example, you can find a lockdep annotation in the path: tcp_time_wait_init tp->af_specific->md5_lookup / tcp_v4_md5_lookup tcp_md5_do_lookup __tcp_md5_do_lookup rcu_dereference_check(tp->md5sig_info, lockdep_sock_is_held(sk)); So, is there a path that unhashes socket without holding lock_sock() ? > the replacement operation failed, we need > to insert tw, and this doesn't change the original logic. > > > > >> + > >> inet_twsk_schedule(tw, timeo); > >> > >> spin_unlock(lock); > >> -- > >> 2.25.1 > >> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-17 4:36 ` Kuniyuki Iwashima @ 2025-09-18 8:32 ` luoxuanqiang 2025-09-19 8:38 ` Kuniyuki Iwashima 0 siblings, 1 reply; 15+ messages in thread From: luoxuanqiang @ 2025-09-18 8:32 UTC (permalink / raw) To: Kuniyuki Iwashima Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo 在 2025/9/17 12:36, Kuniyuki Iwashima 写道: > On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: >> >> 在 2025/9/17 03:48, Kuniyuki Iwashima 写道: >>> On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: >>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>> >>>> Since ehash lookups are lockless, if another CPU is converting sk to tw >>>> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause >>>> lookup failure. >>>> >>>> The call trace map is drawn as follows: >>>> CPU 0 CPU 1 >>>> ----- ----- >>>> inet_twsk_hashdance_schedule() >>>> spin_lock() >>>> inet_twsk_add_node_rcu(tw, ...) >>>> __inet_lookup_established() >>>> (find tw, failure due to tw_refcnt = 0) >>>> __sk_nulls_del_node_init_rcu(sk) >>>> refcount_set(&tw->tw_refcnt, 3) >>>> spin_unlock() >>>> >>>> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after >>>> setting tw_refcnt, we ensure that tw is either fully initialized or not >>>> visible to other CPUs, eliminating the race. >>>> >>>> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") >>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> >>>> --- >>>> net/ipv4/inet_timewait_sock.c | 15 ++++++--------- >>>> 1 file changed, 6 insertions(+), 9 deletions(-) >>>> >>>> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c >>>> index 5b5426b8ee92..1ba20c4cb73b 100644 >>>> --- a/net/ipv4/inet_timewait_sock.c >>>> +++ b/net/ipv4/inet_timewait_sock.c >>>> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >>>> spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); >>>> struct inet_bind_hashbucket *bhead, *bhead2; >>>> >>>> - /* Step 1: Put TW into bind hash. Original socket stays there too. >>>> + /* Put TW into bind hash. Original socket stays there too. >>>> Note, that any socket with inet->num != 0 MUST be bound in >>>> binding cache, even if it is closed. >>>> */ >>>> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >>>> >>>> spin_lock(lock); >>>> >>>> - /* Step 2: Hash TW into tcp ehash chain */ >>>> - inet_twsk_add_node_rcu(tw, &ehead->chain); >>>> - >>>> - /* Step 3: Remove SK from hash chain */ >>>> - if (__sk_nulls_del_node_init_rcu(sk)) >>>> - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); >>>> - >>>> - >>>> /* Ensure above writes are committed into memory before updating the >>>> * refcount. >>>> * Provides ordering vs later refcount_inc(). >>>> @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, >>>> */ >>>> refcount_set(&tw->tw_refcnt, 3); >>>> >>>> + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) >>>> + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); >>>> + else >>>> + inet_twsk_add_node_rcu(tw, &ehead->chain); >>> When hlist_nulls_replace_init_rcu() returns false ? >> When hlist_nulls_replace_init_rcu() returns false, it means >> sk is unhashed, > and how does this happen ? 
> > Here is under lock_sock() I think, for example, you can > find a lockdep annotation in the path: > > tcp_time_wait_init > tp->af_specific->md5_lookup / tcp_v4_md5_lookup > tcp_md5_do_lookup > __tcp_md5_do_lookup > rcu_dereference_check(tp->md5sig_info, lockdep_sock_is_held(sk)); > > So, is there a path that unhashes socket without holding > lock_sock() ? > I'm not entirely sure about this point yet, because inet_unhash() is called in too many places and uses __sk_nulls_del_node_init_rcu() to unhash sockets without explicitly requiring bh_lock_sock(). Until I can verify this, I'll keep the original check for old socket unhashed state to ensure safety. It would be great if you could confirm this behavior. Thanks Xuanqiang. >> the replacement operation failed, we need >> to insert tw, and this doesn't change the original logic. >> >>>> + >>>> inet_twsk_schedule(tw, timeo); >>>> >>>> spin_unlock(lock); >>>> -- >>>> 2.25.1 >>>> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() 2025-09-18 8:32 ` luoxuanqiang @ 2025-09-19 8:38 ` Kuniyuki Iwashima 0 siblings, 0 replies; 15+ messages in thread From: Kuniyuki Iwashima @ 2025-09-19 8:38 UTC (permalink / raw) To: luoxuanqiang Cc: edumazet, kerneljasonxing, davem, kuba, netdev, Xuanqiang Luo On Thu, Sep 18, 2025 at 1:33 AM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: > > > 在 2025/9/17 12:36, Kuniyuki Iwashima 写道: > > On Tue, Sep 16, 2025 at 8:27 PM luoxuanqiang <xuanqiang.luo@linux.dev> wrote: > >> > >> 在 2025/9/17 03:48, Kuniyuki Iwashima 写道: > >>> On Tue, Sep 16, 2025 at 3:31 AM <xuanqiang.luo@linux.dev> wrote: > >>>> From: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >>>> > >>>> Since ehash lookups are lockless, if another CPU is converting sk to tw > >>>> concurrently, fetching the newly inserted tw with tw->tw_refcnt == 0 cause > >>>> lookup failure. > >>>> > >>>> The call trace map is drawn as follows: > >>>> CPU 0 CPU 1 > >>>> ----- ----- > >>>> inet_twsk_hashdance_schedule() > >>>> spin_lock() > >>>> inet_twsk_add_node_rcu(tw, ...) > >>>> __inet_lookup_established() > >>>> (find tw, failure due to tw_refcnt = 0) > >>>> __sk_nulls_del_node_init_rcu(sk) > >>>> refcount_set(&tw->tw_refcnt, 3) > >>>> spin_unlock() > >>>> > >>>> By replacing sk with tw atomically via hlist_nulls_replace_init_rcu() after > >>>> setting tw_refcnt, we ensure that tw is either fully initialized or not > >>>> visible to other CPUs, eliminating the race. > >>>> > >>>> Fixes: 3ab5aee7fe84 ("net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls") > >>>> Signed-off-by: Xuanqiang Luo <luoxuanqiang@kylinos.cn> > >>>> --- > >>>> net/ipv4/inet_timewait_sock.c | 15 ++++++--------- > >>>> 1 file changed, 6 insertions(+), 9 deletions(-) > >>>> > >>>> diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c > >>>> index 5b5426b8ee92..1ba20c4cb73b 100644 > >>>> --- a/net/ipv4/inet_timewait_sock.c > >>>> +++ b/net/ipv4/inet_timewait_sock.c > >>>> @@ -116,7 +116,7 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >>>> spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash); > >>>> struct inet_bind_hashbucket *bhead, *bhead2; > >>>> > >>>> - /* Step 1: Put TW into bind hash. Original socket stays there too. > >>>> + /* Put TW into bind hash. Original socket stays there too. > >>>> Note, that any socket with inet->num != 0 MUST be bound in > >>>> binding cache, even if it is closed. > >>>> */ > >>>> @@ -140,14 +140,6 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >>>> > >>>> spin_lock(lock); > >>>> > >>>> - /* Step 2: Hash TW into tcp ehash chain */ > >>>> - inet_twsk_add_node_rcu(tw, &ehead->chain); > >>>> - > >>>> - /* Step 3: Remove SK from hash chain */ > >>>> - if (__sk_nulls_del_node_init_rcu(sk)) > >>>> - sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > >>>> - > >>>> - > >>>> /* Ensure above writes are committed into memory before updating the > >>>> * refcount. > >>>> * Provides ordering vs later refcount_inc(). > >>>> @@ -162,6 +154,11 @@ void inet_twsk_hashdance_schedule(struct inet_timewait_sock *tw, > >>>> */ > >>>> refcount_set(&tw->tw_refcnt, 3); > >>>> > >>>> + if (hlist_nulls_replace_init_rcu(&sk->sk_nulls_node, &tw->tw_node)) > >>>> + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1); > >>>> + else > >>>> + inet_twsk_add_node_rcu(tw, &ehead->chain); > >>> When hlist_nulls_replace_init_rcu() returns false ? 
> >> When hlist_nulls_replace_init_rcu() returns false, it means > >> sk is unhashed, > > and how does this happen ? > > > > Here is under lock_sock() I think, for example, you can > > find a lockdep annotation in the path: > > > > tcp_time_wait_init > > tp->af_specific->md5_lookup / tcp_v4_md5_lookup > > tcp_md5_do_lookup > > __tcp_md5_do_lookup > > rcu_dereference_check(tp->md5sig_info, lockdep_sock_is_held(sk)); > > > > So, is there a path that unhashes socket without holding > > lock_sock() ? > > > I'm not entirely sure about this point yet, because > inet_unhash() is called in too many places and uses > __sk_nulls_del_node_init_rcu() to unhash sockets without > explicitly requiring bh_lock_sock(). > > Until I can verify this, I'll keep the original check > for old socket unhashed state to ensure safety. > > It would be great if you could confirm this behavior. See: https://lore.kernel.org/netdev/20250919083706.1863217-4-kuniyu@google.com/ ^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2025-09-19 8:38 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-16 10:30 [PATCH net-next v3 0/3] net: Avoid ehash lookup races xuanqiang.luo
2025-09-16 10:30 ` [PATCH net-next v3 1/3] rculist: Add __hlist_nulls_replace_rcu() and hlist_nulls_replace_init_rcu() xuanqiang.luo
2025-09-16 18:58 ` Kuniyuki Iwashima
2025-09-17  3:26 ` luoxuanqiang
2025-09-17  4:27 ` Kuniyuki Iwashima
2025-09-17  4:43 ` Kuniyuki Iwashima
2025-09-18  6:09 ` luoxuanqiang
2025-09-18  6:09 ` luoxuanqiang
2025-09-16 10:30 ` [PATCH net-next v3 2/3] inet: Avoid ehash lookup race in inet_ehash_insert() xuanqiang.luo
2025-09-16 10:30 ` [PATCH net-next v3 3/3] inet: Avoid ehash lookup race in inet_twsk_hashdance_schedule() xuanqiang.luo
2025-09-16 19:48 ` Kuniyuki Iwashima
2025-09-17  3:26 ` luoxuanqiang
2025-09-17  4:36 ` Kuniyuki Iwashima
2025-09-18  8:32 ` luoxuanqiang
2025-09-19  8:38 ` Kuniyuki Iwashima