* [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access
@ 2026-05-15 12:29 Qing Ming
2026-05-15 16:27 ` Tejun Heo
2026-05-16 7:08 ` [PATCH v2] " Qing Ming
0 siblings, 2 replies; 4+ messages in thread
From: Qing Ming @ 2026-05-15 12:29 UTC (permalink / raw)
To: Tejun Heo, Johannes Weiner, Michal Koutný
Cc: cgroups, bpf, linux-kernel, Qing Ming
css_rstat_updated() is exposed as a BPF kfunc and accepts a
caller-provided cpu argument. The function uses cpu for per-cpu rstat
lookups without checking whether it refers to a valid possible CPU.
A BPF iter/cgroup program with CAP_BPF and CAP_PERFMON can pass an
invalid cpu value. On an unfixed kernel built with CONFIG_UBSAN_BOUNDS,
cpu == 0x7fffffff triggers:
UBSAN: array-index-out-of-bounds in kernel/cgroup/rstat.c:31:9
index 2147483647 is out of range for type 'long unsigned int [64]'
Call Trace:
css_rstat_updated
bpf_iter_run_prog
cgroup_iter_seq_show
bpf_seq_read
Reject invalid cpu values before css_rstat_cpu() access.
Fixes: a319185be9f5 ("cgroup: bpf: enable bpf programs to integrate with rstat")
Signed-off-by: Qing Ming <a0yami@mailbox.org>
---
kernel/cgroup/rstat.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 150e5871e66f..b70b57b9345b 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "cgroup-internal.h"
+#include <linux/cpumask.h>
#include <linux/sched/cputime.h>
#include <linux/bpf.h>
@@ -90,6 +91,9 @@ __bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
!IS_ENABLED(CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS)) && in_nmi())
return;
+ if (unlikely(cpu < 0 || cpu >= nr_cpu_ids || !cpu_possible(cpu)))
+ return;
+
rstatc = css_rstat_cpu(css, cpu);
/*
* If already on list return. This check is racy and smp_mb() is needed
--
2.53.0
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access
2026-05-15 12:29 [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access Qing Ming
@ 2026-05-15 16:27 ` Tejun Heo
2026-05-16 5:25 ` Ming Qing
2026-05-16 7:08 ` [PATCH v2] " Qing Ming
1 sibling, 1 reply; 4+ messages in thread
From: Tejun Heo @ 2026-05-15 16:27 UTC (permalink / raw)
To: Qing Ming; +Cc: Johannes Weiner, Michal Koutný, cgroups, bpf, linux-kernel
On Fri, May 15, 2026 at 08:29:52PM +0800, Qing Ming wrote:
> @@ -90,6 +91,9 @@ __bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
> !IS_ENABLED(CONFIG_ARCH_HAS_NMI_SAFE_THIS_CPU_OPS)) && in_nmi())
> return;
>
> + if (unlikely(cpu < 0 || cpu >= nr_cpu_ids || !cpu_possible(cpu)))
> + return;
> +
Can you please add this validation to the BPF kfunc that's exposing it?
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access
2026-05-15 16:27 ` Tejun Heo
@ 2026-05-16 5:25 ` Ming Qing
0 siblings, 0 replies; 4+ messages in thread
From: Ming Qing @ 2026-05-16 5:25 UTC (permalink / raw)
To: Tejun Heo; +Cc: Johannes Weiner, Michal Koutný, cgroups, bpf, linux-kernel
On Sat, May 16, 2026 at 12:27 AM +0800, Tejun Heo wrote:
> Can you please add this validation to the BPF kfunc that's exposing it?
Sure, will do. I'll split out an internal __css_rstat_updated() helper
and keep the cpu validation in the css_rstat_updated() kfunc wrapper.
Internal callers will be converted to the helper in v2.
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH v2] cgroup/rstat: validate cpu before css_rstat_cpu() access
2026-05-15 12:29 [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access Qing Ming
2026-05-15 16:27 ` Tejun Heo
@ 2026-05-16 7:08 ` Qing Ming
1 sibling, 0 replies; 4+ messages in thread
From: Qing Ming @ 2026-05-16 7:08 UTC (permalink / raw)
To: Tejun Heo, Josef Bacik, Jens Axboe, Johannes Weiner,
Michal Koutný, Michal Hocko, Roman Gushchin, Shakeel Butt,
Muchun Song, Andrew Morton, Alexei Starovoitov, Hao Luo,
Yosry Ahmed
Cc: cgroups, linux-block, linux-kernel, linux-mm, bpf, Qing Ming
css_rstat_updated() is exposed as a BPF kfunc and accepts a
caller-provided cpu argument. The function uses cpu for per-cpu rstat
lookups without checking whether it refers to a valid possible CPU.
A BPF iter/cgroup program with CAP_BPF and CAP_PERFMON can pass an
invalid cpu value. On an unfixed kernel built with CONFIG_UBSAN_BOUNDS,
cpu == 0x7fffffff triggers:
UBSAN: array-index-out-of-bounds in kernel/cgroup/rstat.c:31:9
index 2147483647 is out of range for type 'long unsigned int [64]'
Call Trace:
css_rstat_updated
bpf_iter_run_prog
cgroup_iter_seq_show
bpf_seq_read
Add cpu validation to the BPF-facing css_rstat_updated() kfunc and
move the common implementation to __css_rstat_updated() for in-kernel
callers.
Fixes: a319185be9f5 ("cgroup: bpf: enable bpf programs to integrate with rstat")
Signed-off-by: Qing Ming <a0yami@mailbox.org>
---
v2:
- Split css_rstat_updated() into a BPF-visible wrapper and an internal
__css_rstat_updated() helper.
- Switch internal callers to __css_rstat_updated().
block/blk-cgroup.c | 2 +-
include/linux/cgroup.h | 1 +
kernel/cgroup/rstat.c | 30 ++++++++++++++++++++----------
mm/memcontrol.c | 6 +++---
4 files changed, 25 insertions(+), 14 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 554c87bb4a86..bc63bd220865 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -2241,7 +2241,7 @@ void blk_cgroup_bio_start(struct bio *bio)
}
u64_stats_update_end_irqrestore(&bis->sync, flags);
- css_rstat_updated(&blkcg->css, cpu);
+ __css_rstat_updated(&blkcg->css, cpu);
put_cpu();
}
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index f6d037a30fd8..c5648fcf74e2 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -777,6 +777,7 @@ static inline void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
/*
* cgroup scalable recursive statistics.
*/
+void __css_rstat_updated(struct cgroup_subsys_state *css, int cpu);
void css_rstat_updated(struct cgroup_subsys_state *css, int cpu);
void css_rstat_flush(struct cgroup_subsys_state *css);
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 150e5871e66f..ed60ba119c68 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "cgroup-internal.h"
+#include <linux/cpumask.h>
#include <linux/sched/cputime.h>
#include <linux/bpf.h>
@@ -53,7 +54,7 @@ static inline struct llist_head *ss_lhead_cpu(struct cgroup_subsys *ss, int cpu)
}
/**
- * css_rstat_updated - keep track of updated rstat_cpu
+ * __css_rstat_updated - keep track of updated rstat_cpu
* @css: target cgroup subsystem state
* @cpu: cpu on which rstat_cpu was updated
*
@@ -63,20 +64,17 @@ static inline struct llist_head *ss_lhead_cpu(struct cgroup_subsys *ss, int cpu)
*
* NOTE: if the user needs the guarantee that the updater either add itself in
* the lockless list or the concurrent flusher flushes its updated stats, a
- * memory barrier is needed before the call to css_rstat_updated() i.e. a
+ * memory barrier is needed before the call to __css_rstat_updated() i.e. a
* barrier after updating the per-cpu stats and before calling
- * css_rstat_updated().
+ * __css_rstat_updated().
*/
-__bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
+void __css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
{
struct llist_head *lhead;
struct css_rstat_cpu *rstatc;
struct llist_node *self;
- /*
- * Since bpf programs can call this function, prevent access to
- * uninitialized rstat pointers.
- */
+ /* Prevent access to uninitialized rstat pointers. */
if (!css_uses_rstat(css))
return;
@@ -125,6 +123,18 @@ __bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
llist_add(&rstatc->lnode, lhead);
}
+/*
+ * BPF-facing wrapper for __css_rstat_updated(). Validate the caller-provided
+ * CPU before passing it to the internal rstat updater.
+ */
+__bpf_kfunc void css_rstat_updated(struct cgroup_subsys_state *css, int cpu)
+{
+ if (unlikely(cpu < 0 || cpu >= nr_cpu_ids || !cpu_possible(cpu)))
+ return;
+
+ __css_rstat_updated(css, cpu);
+}
+
static void __css_process_update_tree(struct cgroup_subsys_state *css, int cpu)
{
/* put @css and all ancestors on the corresponding updated lists */
@@ -170,7 +180,7 @@ static void css_process_update_tree(struct cgroup_subsys *ss, int cpu)
* flusher flush the stats updated by the updater who have
* observed that they are already on the list. The
* corresponding barrier pair for this one should be before
- * css_rstat_updated() by the user.
+ * __css_rstat_updated() by the user.
*
* For now, there aren't any such user, so not adding the
* barrier here but if such a use-case arise, please add
@@ -614,7 +624,7 @@ static void cgroup_base_stat_cputime_account_end(struct cgroup *cgrp,
unsigned long flags)
{
u64_stats_update_end_irqrestore(&rstatbc->bsync, flags);
- css_rstat_updated(&cgrp->self, smp_processor_id());
+ __css_rstat_updated(&cgrp->self, smp_processor_id());
put_cpu_ptr(rstatbc);
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c03d4787d466..749c128b4fad 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -679,7 +679,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, long val,
if (!val)
return;
- css_rstat_updated(&memcg->css, cpu);
+ __css_rstat_updated(&memcg->css, cpu);
statc_pcpu = memcg->vmstats_percpu;
for (; statc_pcpu; statc_pcpu = statc->parent_pcpu) {
statc = this_cpu_ptr(statc_pcpu);
@@ -2796,7 +2796,7 @@ static inline void account_slab_nmi_safe(struct mem_cgroup *memcg,
struct mem_cgroup_per_node *pn = memcg->nodeinfo[pgdat->node_id];
/* preemption is disabled in_nmi(). */
- css_rstat_updated(&memcg->css, smp_processor_id());
+ __css_rstat_updated(&memcg->css, smp_processor_id());
if (idx == NR_SLAB_RECLAIMABLE_B)
atomic_add(nr, &pn->slab_reclaimable);
else
@@ -3019,7 +3019,7 @@ static inline void account_kmem_nmi_safe(struct mem_cgroup *memcg, int val)
mod_memcg_state(memcg, MEMCG_KMEM, val);
} else {
/* preemption is disabled in_nmi(). */
- css_rstat_updated(&memcg->css, smp_processor_id());
+ __css_rstat_updated(&memcg->css, smp_processor_id());
atomic_add(val, &memcg->kmem_stat);
}
}
--
2.53.0
^ permalink raw reply related [flat|nested] 4+ messages in thread
end of thread, other threads:[~2026-05-16 7:09 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-15 12:29 [PATCH] cgroup/rstat: validate cpu before css_rstat_cpu() access Qing Ming
2026-05-15 16:27 ` Tejun Heo
2026-05-16 5:25 ` Ming Qing
2026-05-16 7:08 ` [PATCH v2] " Qing Ming
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox