From: Tejun Heo <tj@kernel.org>
To: David Vernet <void@manifault.com>,
Andrea Righi <arighi@nvidia.com>,
Changwoo Min <changwoo@igalia.com>
Cc: sched-ext@lists.linux.dev, Emil Tsalapatis <emil@etsalapatis.com>,
linux-kernel@vger.kernel.org, Tejun Heo <tj@kernel.org>
Subject: [PATCH 3/6] tools/sched_ext/include: Add missing helpers to common.bpf.h
Date: Sat, 7 Mar 2026 16:45:16 -1000
Message-ID: <20260308024519.1980564-4-tj@kernel.org>
In-Reply-To: <20260308024519.1980564-1-tj@kernel.org>
Sync several helpers from the scx repo:
- bpf_cgroup_acquire() ksym declaration
- __sink() macro for hiding values from verifier precision tracking
- ctzll() count-trailing-zeros implementation
- get_prandom_u64() helper
- scx_clock_task/pelt/virt/irq() clock helpers with get_current_rq()
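For illustration only, a minimal sketch that is not part of this patch
(example_running and the 1/1024 sampling rate are made up):

  void BPF_STRUCT_OPS(example_running, struct task_struct *p)
  {
          u32 cpu = bpf_get_smp_processor_id();
          /* rq->clock_task of the local CPU, via scx_clock_task() */
          u64 now = scx_clock_task(cpu);

          /* Sample roughly one in 1024 ops.running() invocations. */
          if ((get_prandom_u64() & 0x3ff) == 0)
                  bpf_printk("cpu%u clock_task=%llu", cpu, now);
  }

ops.running() is among the callbacks that get_current_rq() documents as
holding the rq lock, so calling the scx_clock_*() helpers there is safe.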
Signed-off-by: Tejun Heo <tj@kernel.org>
---
tools/sched_ext/include/scx/common.bpf.h | 277 +++++++++++++++++++++++
1 file changed, 277 insertions(+)
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index eba4d87345e0..a63a98a96b86 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -292,6 +292,50 @@ BPF_PROG(name, ##args)
})
#endif /* ARRAY_ELEM_PTR */
+/**
+ * __sink - Hide @expr's value from the compiler and BPF verifier
+ * @expr: The expression whose value should be made opaque
+ *
+ * No-op at runtime. The empty inline assembly with a read-write constraint
+ * ("+g") has two effects at compile/verify time:
+ *
+ * 1. Compiler: treats @expr as both read and written, preventing dead-code
+ * elimination and keeping @expr (and any side effects that produced it)
+ * alive.
+ *
+ * 2. BPF verifier: forgets the precise value/range of @expr ("makes it
+ * imprecise"). The verifier normally tracks exact ranges for every register
+ * and stack slot. While useful, precision means each distinct value creates a
+ * separate verifier state. Inside loops this leads to state explosion - each
+ * iteration carries different precise values so states never merge and the
+ * verifier explores every iteration individually.
+ *
+ * Example - preventing loop state explosion::
+ *
+ * u32 nr_intersects = 0, nr_covered = 0;
+ * __sink(nr_intersects);
+ * __sink(nr_covered);
+ * bpf_for(i, 0, nr_nodes) {
+ * if (intersects(cpumask, node_mask[i]))
+ * nr_intersects++;
+ * if (covers(cpumask, node_mask[i]))
+ * nr_covered++;
+ * }
+ *
+ * Without __sink(), the verifier tracks every possible (nr_intersects,
+ * nr_covered) pair across iterations, causing "BPF program is too large". With
+ * __sink(), the values become unknown scalars so all iterations collapse into
+ * one reusable state.
+ *
+ * Example - keeping a reference alive::
+ *
+ * struct task_struct *t = bpf_task_acquire(task);
+ * __sink(t);
+ *
+ * Follows the convention from BPF selftests (bpf_misc.h).
+ */
+#define __sink(expr) asm volatile ("" : "+g"(expr))
+
/*
* BPF declarations and helpers
*/
@@ -337,6 +381,7 @@ void bpf_task_release(struct task_struct *p) __ksym;
/* cgroup */
struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym;
+struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) __ksym;
void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
@@ -742,6 +787,73 @@ static inline u64 __sqrt_u64(u64 x)
return r;
}
+/*
+ * ctzll -- Counts trailing zeros in an unsigned long long. If the input value
+ * is zero, the return value is undefined.
+ */
+static inline int ctzll(u64 v)
+{
+#if (!defined(__BPF__) && defined(__SCX_TARGET_ARCH_x86)) || \
+ (defined(__BPF__) && defined(__clang_major__) && __clang_major__ >= 19)
+ /*
+ * Use the ctz builtin when: (1) building for native x86, or
+ * (2) building for BPF with clang >= 19 (BPF backend supports
+ * the intrinsic from clang 19 onward; earlier versions hit
+ * "unimplemented opcode" in the backend).
+ */
+ return __builtin_ctzll(v);
+#else
+ /*
+ * If neither the target architecture nor the toolchain supports ctzll,
+ * fall back to a software emulation. Use a De Bruijn sequence-based
+ * approach to find the LSB quickly. See the following for details on
+ * De Bruijn sequences:
+ *
+ * https://en.wikipedia.org/wiki/De_Bruijn_sequence
+ * https://www.chessprogramming.org/BitScan#De_Bruijn_Multiplication
+ */
+ const int lookup_table[64] = {
+ 0, 1, 48, 2, 57, 49, 28, 3, 61, 58, 50, 42, 38, 29, 17, 4,
+ 62, 55, 59, 36, 53, 51, 43, 22, 45, 39, 33, 30, 24, 18, 12, 5,
+ 63, 47, 56, 27, 60, 41, 37, 16, 54, 35, 52, 21, 44, 32, 23, 11,
+ 46, 26, 40, 15, 34, 20, 31, 10, 25, 14, 19, 9, 13, 8, 7, 6,
+ };
+ const u64 DEBRUIJN_CONSTANT = 0x03f79d71b4cb0a89ULL;
+ unsigned int index;
+ u64 lowest_bit;
+ const int *lt;
+
+ if (v == 0)
+ return -1;
+
+ /*
+ * Isolate the least significant bit (LSB).
+ * For example, if v = 0b...10100, then v & -v = 0b...00100
+ */
+ lowest_bit = v & -v;
+
+ /*
+ * Each isolated bit produces a unique 6-bit value, guaranteed by the
+ * De Bruijn property. Calculate a unique index into the lookup table
+ * using the magic constant and a right shift.
+ *
+ * Multiplying by the 64-bit constant "spreads" that single bit into a
+ * pattern in the top 6 bits, and the shift by 58 extracts that pattern
+ * as the index. The De Bruijn property guarantees that each of the 64
+ * possible LSB positions produces a distinct 6-bit pattern, so the
+ * constant 0x03f79d71b4cb0a89ULL yields a collision-free index into
+ * the lookup table.
+ */
+ index = (lowest_bit * DEBRUIJN_CONSTANT) >> 58;
+
+ /*
+ * Look up the result in the precomputed table; the De Bruijn property
+ * guarantees there are no collisions.
+ */
+ lt = MEMBER_VPTR(lookup_table, [index]);
+ return lt ? *lt : -1;
+#endif
+}
+
/*
* Return a value proportionally scaled to the task's weight.
*/
@@ -759,6 +871,171 @@ static inline u64 scale_by_task_weight_inverse(const struct task_struct *p, u64
}
+/*
+ * Get a random u64 from the kernel's pseudo-random generator.
+ */
+static inline u64 get_prandom_u64(void)
+{
+ return ((u64)bpf_get_prandom_u32() << 32) | bpf_get_prandom_u32();
+}
+
+/*
+ * Define a shadow structure to avoid a compilation error when the
+ * vmlinux.h in use was generated without the necessary kernel
+ * configs. The ___local
+ * suffix is a CO-RE convention that tells the loader to match this
+ * against the base struct rq in the kernel. The attribute
+ * preserve_access_index tells the compiler to generate a CO-RE
+ * relocation for these fields.
+ */
+struct rq___local {
+ /*
+ * A monotonically increasing clock per CPU. It is rq->clock minus
+ * cumulative IRQ time and hypervisor steal time. Unlike rq->clock,
+ * it does not advance during IRQ processing or hypervisor preemption.
+ * It does advance during idle (the idle task counts as a running task
+ * for this purpose).
+ */
+ u64 clock_task;
+ /*
+ * Invariant version of clock_task scaled by CPU capacity and
+ * frequency. For example, clock_pelt advances 2x slower on a CPU
+ * with half the capacity.
+ *
+ * At idle exit, rq->clock_pelt jumps forward to resync with
+ * clock_task. The kernel's rq_clock_pelt() corrects for this jump
+ * by subtracting lost_idle_time, yielding a clock that appears
+ * continuous across idle transitions. scx_clock_pelt() mirrors
+ * rq_clock_pelt() by performing the same subtraction.
+ */
+ u64 clock_pelt;
+ /*
+ * Accumulates the magnitude of each clock_pelt jump at idle exit.
+ * Subtracting this from clock_pelt gives rq_clock_pelt(): a
+ * continuous, capacity-invariant clock suitable for both task
+ * execution time stamping and cross-idle measurements.
+ */
+ unsigned long lost_idle_time;
+ /*
+ * Shadow of paravirt_steal_clock() (the hypervisor's cumulative
+ * stolen time counter). Stays frozen while the hypervisor preempts
+ * the vCPU; catches up the next time update_rq_clock_task() is
+ * called. The delta is the stolen time not yet subtracted from
+ * clock_task.
+ *
+ * Unlike irqtime->total (a plain kernel-side field), the live stolen
+ * time counter lives in hypervisor-specific shared memory and has no
+ * kernel-side equivalent readable from BPF in a hypervisor-agnostic
+ * way. This field is therefore the only portable BPF-accessible
+ * approximation of cumulative steal time.
+ *
+ * Available only when CONFIG_PARAVIRT_TIME_ACCOUNTING is on.
+ */
+ u64 prev_steal_time_rq;
+} __attribute__((preserve_access_index));
+
+extern struct rq runqueues __ksym;
+
+/*
+ * Define a shadow structure to avoid a compilation error when the
+ * vmlinux.h in use was generated without the necessary kernel configs.
+ */
+struct irqtime___local {
+ /*
+ * Cumulative IRQ time counter for this CPU, in nanoseconds. Advances
+ * immediately at the exit of every hardirq and non-ksoftirqd softirq
+ * via irqtime_account_irq(). ksoftirqd time is counted as normal
+ * task time and is NOT included. NMI time is also NOT included.
+ *
+ * The companion field irqtime->sync (struct u64_stats_sync) protects
+ * against 64-bit tearing on 32-bit architectures. On 64-bit kernels,
+ * u64_stats_sync is an empty struct and all seqcount operations are
+ * no-ops, so a plain BPF_CORE_READ of this field is safe.
+ *
+ * Available only when CONFIG_IRQ_TIME_ACCOUNTING is on.
+ */
+ u64 total;
+} __attribute__((preserve_access_index));
+
+/*
+ * cpu_irqtime is a per-CPU variable defined only when
+ * CONFIG_IRQ_TIME_ACCOUNTING is on. Declare it as __weak so the BPF
+ * loader sets its address to 0 (rather than failing) when the symbol
+ * is absent from the running kernel.
+ */
+extern struct irqtime___local cpu_irqtime __ksym __weak;
+
+static inline struct rq___local *get_current_rq(u32 cpu)
+{
+ /*
+ * This is a workaround to get an rq pointer since we decided to
+ * deprecate scx_bpf_cpu_rq().
+ *
+ * WARNING: The caller must hold the rq lock for @cpu. This is
+ * guaranteed when called from scheduling callbacks (ops.running,
+ * ops.stopping, ops.enqueue, ops.dequeue, ops.dispatch, etc.).
+ * There is no runtime check available in BPF for kernel spinlock
+ * state — correctness is enforced by calling context only.
+ */
+ return (void *)bpf_per_cpu_ptr(&runqueues, cpu);
+}
+
+static inline u64 scx_clock_task(u32 cpu)
+{
+ struct rq___local *rq = get_current_rq(cpu);
+
+ /* Equivalent to the kernel's rq_clock_task(). */
+ return rq ? rq->clock_task : 0;
+}
+
+static inline u64 scx_clock_pelt(u32 cpu)
+{
+ struct rq___local *rq = get_current_rq(cpu);
+
+ /*
+ * Equivalent to the kernel's rq_clock_pelt(): subtracts
+ * lost_idle_time from clock_pelt to absorb the jump that occurs
+ * when clock_pelt resyncs with clock_task at idle exit. The result
+ * is a continuous, capacity-invariant clock safe for both task
+ * execution time stamping and cross-idle measurements.
+ */
+ return rq ? (rq->clock_pelt - rq->lost_idle_time) : 0;
+}
+
+static inline u64 scx_clock_virt(u32 cpu)
+{
+ struct rq___local *rq;
+
+ /*
+ * Check field existence before calling get_current_rq() so we avoid
+ * the per_cpu lookup entirely on kernels built without
+ * CONFIG_PARAVIRT_TIME_ACCOUNTING.
+ */
+ if (!bpf_core_field_exists(((struct rq___local *)0)->prev_steal_time_rq))
+ return 0;
+
+ /* Lagging shadow of the kernel's paravirt_steal_clock(). */
+ rq = get_current_rq(cpu);
+ return rq ? BPF_CORE_READ(rq, prev_steal_time_rq) : 0;
+}
+
+static inline u64 scx_clock_irq(u32 cpu)
+{
+ struct irqtime___local *irqt;
+
+ /*
+ * bpf_core_type_exists() resolves at load time: if struct irqtime is
+ * absent from kernel BTF (CONFIG_IRQ_TIME_ACCOUNTING off), the loader
+ * patches this into an unconditional return 0, making the
+ * bpf_per_cpu_ptr() call below dead code that the verifier never sees.
+ */
+ if (!bpf_core_type_exists(struct irqtime___local))
+ return 0;
+
+ /* Equivalent to the kernel's irq_time_read(). */
+ irqt = bpf_per_cpu_ptr(&cpu_irqtime, cpu);
+ return irqt ? BPF_CORE_READ(irqt, total) : 0;
+}
+
#include "compat.bpf.h"
#include "enums.bpf.h"
--
2.53.0