* Subject: [PATCH net] tipc: fix UAF race in tipc_mon_peer_up/down/remove_peer vs bearer teardown
From: Kai Zen @ 2026-03-29 20:23 UTC (permalink / raw)
To: stable, Kai Aizen
CVE-2025-40280 fixed tipc_mon_reinit_self() accessing monitors[] from a
workqueue without RTNL. That patch closed the workqueue path by adding
rtnl_lock() around the call.
However, three additional functions in the same subsystem access
tipc_net->monitors[] from softirq context with no RCU protection at all:
tipc_mon_peer_up() - called from tipc_node_write_unlock()
tipc_mon_peer_down() - called from tipc_node_write_unlock()
tipc_mon_remove_peer() - called from tipc_node_link_down()
These three are invoked from the packet receive path (tipc_rcv ->
tipc_node_write_unlock / tipc_node_link_down) and hold only the per-node
rwlock, not RTNL.
Concurrently, bearer_disable() -- which always holds RTNL per its own
inline documentation -- calls tipc_mon_delete(), which:
1. acquires mon->lock
2. sets tn->monitors[bearer_id] = NULL
3. frees all peer entries
4. releases mon->lock
5. calls kfree(mon) <-- no synchronize_rcu()
The race is structural: there is no shared lock between the data-path
reader (which reads monitors[id] then acquires mon->lock) and the
teardown path (which acquires mon->lock, NULLs the slot, then frees).
A softirq thread can read a non-NULL mon pointer, get preempted, and
resume after kfree(mon) has run on another CPU, then call
write_lock_bh(&mon->lock) on freed memory:
CPU 0 (softirq / tipc_rcv)            CPU 1 (RTNL / bearer_disable)
tipc_mon_peer_up()
  mon = tipc_monitor(net, id)
  [mon is non-NULL]
                                      tipc_mon_delete()
                                        write_lock_bh(&mon->lock)
                                        tn->monitors[id] = NULL
                                        ...
                                        write_unlock_bh(&mon->lock)
                                        kfree(mon)
  write_lock_bh(&mon->lock) <-- UAF
The fix mirrors the existing bearer_list[] pattern in the same module:
convert monitors[] to __rcu, use rcu_assign_pointer() on creation,
RCU_INIT_POINTER() + synchronize_rcu() on deletion (before the kfree),
and the appropriate rcu_dereference_bh() vs rtnl_dereference() variant
at each read site depending on execution context.
synchronize_rcu() in tipc_mon_delete() is placed after the
write_unlock_bh() and before timer_shutdown_sync() + kfree() to ensure
all softirq-context readers that already observed the old pointer have
completed before the memory is freed.
Fixes: 35c55c9877f8 ("tipc: add neighbor monitoring framework")
Cc: stable@vger.kernel.org
Signed-off-by: Kai Aizen <kai.aizen.dev@gmail.com>
---
net/tipc/core.h | 2 +-
net/tipc/monitor.c | 51 ++++++++++++++++++++++++++++++++--------------
2 files changed, 37 insertions(+), 16 deletions(-)
diff --git a/net/tipc/core.h b/net/tipc/core.h
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -109,7 +109,7 @@
u32 num_links;
/* Neighbor monitoring list */
- struct tipc_monitor *monitors[MAX_BEARERS];
+ struct tipc_monitor __rcu *monitors[MAX_BEARERS];
int mon_threshold;
/* Bearer list */
diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c
--- a/net/tipc/monitor.c
+++ b/net/tipc/monitor.c
@@ -97,9 +97,20 @@
unsigned long timer_intv;
};
-static struct tipc_monitor *tipc_monitor(struct net *net, int bearer_id)
+/*
+ * tipc_monitor_rcu_bh - dereference monitors[] from softirq / data path.
+ * Softirq context implicitly provides RCU-bh read-side protection on
+ * non-PREEMPT_RT kernels; callers on RT should hold rcu_read_lock_bh().
+ */
+static struct tipc_monitor *tipc_monitor_rcu_bh(struct net *net, int bearer_id)
+{
+ return rcu_dereference_bh(tipc_net(net)->monitors[bearer_id]);
+}
+
+/* tipc_monitor_rtnl - dereference monitors[] from RTNL-held control path. */
+static struct tipc_monitor *tipc_monitor_rtnl(struct net *net, int bearer_id)
{
- return tipc_net(net)->monitors[bearer_id];
+ return rtnl_dereference(tipc_net(net)->monitors[bearer_id]);
}
const int tipc_max_domain_size = sizeof(struct tipc_mon_domain);
@@ -194,7 +205,7 @@
static struct tipc_peer *get_self(struct net *net, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
return mon->self;
}
@@ -351,7 +362,7 @@
void tipc_mon_remove_peer(struct net *net, u32 addr, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_peer *self;
struct tipc_peer *peer, *prev, *head;
@@ -421,7 +432,7 @@
void tipc_mon_peer_up(struct net *net, u32 addr, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_peer *self = get_self(net, bearer_id);
struct tipc_peer *peer, *head;
@@ -440,7 +451,7 @@
void tipc_mon_peer_down(struct net *net, u32 addr, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_peer *self;
struct tipc_peer *peer, *head;
struct tipc_mon_domain *dom;
@@ -480,7 +491,7 @@
void tipc_mon_rcv(struct net *net, void *data, u16 dlen, u32 addr,
struct tipc_mon_state *state, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_mon_domain *arrv_dom = data;
struct tipc_mon_domain dom_bef;
struct tipc_mon_domain *dom;
@@ -566,7 +577,7 @@
void tipc_mon_prep(struct net *net, void *data, int *dlen,
struct tipc_mon_state *state, int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_mon_domain *dom = data;
u16 gen = mon->dom_gen;
u16 len;
@@ -600,7 +611,7 @@
struct tipc_mon_state *state,
int bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = tipc_monitor_rcu_bh(net, bearer_id);
struct tipc_peer *peer;
if (!tipc_mon_is_active(net, mon)) {
@@ -651,7 +662,7 @@
struct tipc_peer *self;
struct tipc_mon_domain *dom;
- if (tn->monitors[bearer_id])
+ if (rtnl_dereference(tn->monitors[bearer_id]))
return 0;
mon = kzalloc_obj(*mon, GFP_ATOMIC);
@@ -663,7 +674,7 @@
kfree(dom);
return -ENOMEM;
}
- tn->monitors[bearer_id] = mon;
+ rcu_assign_pointer(tn->monitors[bearer_id], mon);
rwlock_init(&mon->lock);
mon->net = net;
mon->peer_cnt = 1;
@@ -682,7 +693,7 @@
void tipc_mon_delete(struct net *net, int bearer_id)
{
struct tipc_net *tn = tipc_net(net);
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon = rtnl_dereference(tn->monitors[bearer_id]);
struct tipc_peer *self;
struct tipc_peer *peer, *tmp;
@@ -690,8 +701,13 @@
return;
self = get_self(net, bearer_id);
+ /*
+ * Null the pointer under write lock so data-path lookups immediately
+ * return NULL, then wait for readers that already loaded the old
+ * pointer to finish before freeing.
+ */
write_lock_bh(&mon->lock);
- tn->monitors[bearer_id] = NULL;
+ RCU_INIT_POINTER(tn->monitors[bearer_id], NULL);
list_for_each_entry_safe(peer, tmp, &self->list, list) {
list_del(&peer->list);
hlist_del(&peer->hash);
@@ -700,6 +716,7 @@
}
mon->self = NULL;
write_unlock_bh(&mon->lock);
+ synchronize_rcu();
timer_shutdown_sync(&mon->timer);
kfree(self->domain);
kfree(self);
@@ -712,7 +729,7 @@
int bearer_id;
for (bearer_id = 0; bearer_id < MAX_BEARERS; bearer_id++) {
- mon = tipc_monitor(net, bearer_id);
+ mon = rtnl_dereference(tipc_net(net)->monitors[bearer_id]);
if (!mon)
continue;
write_lock_bh(&mon->lock);
@@ -798,7 +815,7 @@
int tipc_nl_add_monitor_peer(struct net *net, struct tipc_nl_msg *msg,
u32 bearer_id, u32 *prev_node)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon =
+ rtnl_dereference(tipc_net(net)->monitors[bearer_id]);
struct tipc_peer *peer;
if (!mon)
@@ -827,7 +844,7 @@
int __tipc_nl_add_monitor(struct net *net, struct tipc_nl_msg *msg,
u32 bearer_id)
{
- struct tipc_monitor *mon = tipc_monitor(net, bearer_id);
+ struct tipc_monitor *mon =
+ rtnl_dereference(tipc_net(net)->monitors[bearer_id]);
char bearer_name[TIPC_MAX_BEARER_NAME];
struct nlattr *attrs;
void *hdr;
Cheers,
Kai Aizen @Snailsploit
* Re: Subject: [PATCH net] tipc: fix UAF race in tipc_mon_peer_up/down/remove_peer vs bearer teardown
From: Greg KH @ 2026-03-30 5:45 UTC (permalink / raw)
To: Kai Zen; +Cc: stable, Kai Aizen
On Sun, Mar 29, 2026 at 11:23:49PM +0300, Kai Zen wrote:
> CVE-2025-40280 fixed tipc_mon_reinit_self() accessing monitors[] from a
> workqueue without RTNL. That patch closed the workqueue path by adding
> rtnl_lock() around the call.
>
> However, three additional functions in the same subsystem access
> tipc_net->monitors[] from softirq context with no RCU protection at all:
>
> tipc_mon_peer_up() - called from tipc_node_write_unlock()
> tipc_mon_peer_down() - called from tipc_node_write_unlock()
> tipc_mon_remove_peer() - called from tipc_node_link_down()
>
> These three are invoked from the packet receive path (tipc_rcv ->
> tipc_node_write_unlock / tipc_node_link_down) and hold only the per-node
> rwlock, not RTNL.
>
> Concurrently, bearer_disable() -- which always holds RTNL per its own
> inline documentation -- calls tipc_mon_delete(), which:
>
> 1. acquires mon->lock
> 2. sets tn->monitors[bearer_id] = NULL
> 3. frees all peer entries
> 4. releases mon->lock
> 5. calls kfree(mon) <-- no synchronize_rcu()
>
> The race is structural: there is no shared lock between the data-path
> reader (which reads monitors[id] then acquires mon->lock) and the
> teardown path (which acquires mon->lock, NULLs the slot, then frees).
> A softirq thread can read a non-NULL mon pointer, get preempted, and
> resume after kfree(mon) has run on another CPU, then call
> write_lock_bh(&mon->lock) on freed memory:
>
> CPU 0 (softirq / tipc_rcv)            CPU 1 (RTNL / bearer_disable)
> tipc_mon_peer_up()
>   mon = tipc_monitor(net, id)
>   [mon is non-NULL]
>                                       tipc_mon_delete()
>                                         write_lock_bh(&mon->lock)
>                                         tn->monitors[id] = NULL
>                                         ...
>                                         write_unlock_bh(&mon->lock)
>                                         kfree(mon)
>   write_lock_bh(&mon->lock) <-- UAF
>
> The fix mirrors the existing bearer_list[] pattern in the same module:
> convert monitors[] to __rcu, use rcu_assign_pointer() on creation,
> RCU_INIT_POINTER() + synchronize_rcu() on deletion (before the kfree),
> and the appropriate rcu_dereference_bh() vs rtnl_dereference() variant
> at each read site depending on execution context.
>
> synchronize_rcu() in tipc_mon_delete() is placed after the
> write_unlock_bh() and before timer_shutdown_sync() + kfree() to ensure
> all softirq-context readers that already observed the old pointer have
> completed before the memory is freed.
>
> Fixes: 35c55c9877f8 ("tipc: add neighbor monitoring framework")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kai Aizen <kai.aizen.dev@gmail.com>
> ---
> net/tipc/core.h | 2 +-
> net/tipc/monitor.c | 51 ++++++++++++++++++++++++++++++++--------------
> 2 files changed, 37 insertions(+), 16 deletions(-)
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>