* [PATCH v5 1/6] sched_ext: Relocate scx_enabled() related code
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 3:11 ` [PATCH v5 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
` (4 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
scx_enabled() will be used by scx_rq_clock_update() and
scx_rq_clock_invalidate() in the following patch, so relocate the
scx_enabled() related code earlier in sched.h, ahead of the rq lock
helpers that will need it.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
kernel/sched/sched.h | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 76f5f53a645f..440ecedf871b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1717,6 +1717,19 @@ struct rq_flags {
extern struct balance_callback balance_push_callback;
+#ifdef CONFIG_SCHED_CLASS_EXT
+extern const struct sched_class ext_sched_class;
+
+DECLARE_STATIC_KEY_FALSE(__scx_ops_enabled); /* SCX BPF scheduler loaded */
+DECLARE_STATIC_KEY_FALSE(__scx_switched_all); /* all fair class tasks on SCX */
+
+#define scx_enabled() static_branch_unlikely(&__scx_ops_enabled)
+#define scx_switched_all() static_branch_unlikely(&__scx_switched_all)
+#else /* !CONFIG_SCHED_CLASS_EXT */
+#define scx_enabled() false
+#define scx_switched_all() false
+#endif /* !CONFIG_SCHED_CLASS_EXT */
+
/*
* Lockdep annotation that avoids accidental unlocks; it's like a
* sticky/continuous lockdep_assert_held().
@@ -2505,19 +2518,6 @@ extern const struct sched_class rt_sched_class;
extern const struct sched_class fair_sched_class;
extern const struct sched_class idle_sched_class;
-#ifdef CONFIG_SCHED_CLASS_EXT
-extern const struct sched_class ext_sched_class;
-
-DECLARE_STATIC_KEY_FALSE(__scx_ops_enabled); /* SCX BPF scheduler loaded */
-DECLARE_STATIC_KEY_FALSE(__scx_switched_all); /* all fair class tasks on SCX */
-
-#define scx_enabled() static_branch_unlikely(&__scx_ops_enabled)
-#define scx_switched_all() static_branch_unlikely(&__scx_switched_all)
-#else /* !CONFIG_SCHED_CLASS_EXT */
-#define scx_enabled() false
-#define scx_switched_all() false
-#endif /* !CONFIG_SCHED_CLASS_EXT */
-
/*
* Iterate only active classes. SCX can take over all fair tasks or be
* completely disabled. If the former, skip fair. If the latter, skip SCX.
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v5 2/6] sched_ext: Implement scx_bpf_now_ns()
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
2024-12-16 3:11 ` [PATCH v5 1/6] sched_ext: Relocate scx_enabled() related code Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 3:11 ` [PATCH v5 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler Changwoo Min
` (3 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
scx_bpf_now_ns() returns a high-performance, monotonically non-decreasing
clock for the current CPU. The returned clock value is in nanoseconds.
It provides the following properties:
1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
to account for execution time and track tasks' runtime properties.
Unfortunately, on some hardware platforms, bpf_ktime_get_ns() -- which
eventually reads a hardware timestamp counter -- is neither performant nor
scalable. scx_bpf_now_ns() aims to provide a high-performance clock by
using the rq clock in the scheduler core whenever possible (a
runtime-accounting sketch follows this list).
2) High enough resolution for BPF scheduler use cases: In most BPF
scheduler use cases, the required clock resolution is lower than that of
the most accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now_ns()
uses the rq clock in the scheduler core whenever it is valid, and it
considers the rq clock valid from the time it is updated
(update_rq_clock) until the rq is unlocked (rq_unpin_lock).
3) Monotonically non-decreasing clock on the same CPU: scx_bpf_now_ns()
guarantees that clock values never go backward when compared on the same
CPU. When comparing clock values taken on different CPUs, there is no
such guarantee -- the clock can appear to go backward. The clock is
monotonically *non-decreasing* rather than strictly increasing: two
scx_bpf_now_ns() calls on the same CPU may return the same value while
the same rq clock remains valid.
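As an illustration of property 1, the runtime-accounting pattern could
look like the following minimal sketch. This is an assumption-laden
example, not part of this patch: the ops names and the task-storage map
are hypothetical.

struct task_ctx {
        u64 started_at;         /* last time the task went on-CPU */
        u64 runtime;            /* accumulated runtime in ns */
};

struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct task_ctx);
} task_ctxs SEC(".maps");

void BPF_STRUCT_OPS(sketch_running, struct task_struct *p)
{
        struct task_ctx *tctx = bpf_task_storage_get(&task_ctxs, p, 0, 0);

        if (tctx)
                tctx->started_at = scx_bpf_now_ns();    /* cached rq clock when valid */
}

void BPF_STRUCT_OPS(sketch_stopping, struct task_struct *p, bool runnable)
{
        struct task_ctx *tctx = bpf_task_storage_get(&task_ctxs, p, 0, 0);

        /* both reads happen on the task's CPU, so the delta is never negative */
        if (tctx)
                tctx->runtime += scx_bpf_now_ns() - tctx->started_at;
}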
An rq clock becomes valid when it is updated using update_rq_clock()
and invalidated when the rq is unlocked using rq_unpin_lock().
Let's suppose the following timeline in the scheduler core:
T1. rq_lock(rq)
T2. update_rq_clock(rq)
T3. a sched_ext BPF operation
T4. rq_unlock(rq)
T5. a sched_ext BPF operation
T6. rq_lock(rq)
T7. update_rq_clock(rq)
For [T2, T4), we consider the rq clock valid (SCX_RQ_CLK_VALID is set),
so scx_bpf_now_ns() calls during [T2, T4) (including T3) return the rq
clock updated at T2. For [T4, T7), during which a BPF scheduler can
still call scx_bpf_now_ns() (T5), we consider the rq clock invalid
(SCX_RQ_CLK_VALID is cleared at T4), so a scx_bpf_now_ns() call at T5
returns a fresh clock value obtained from sched_clock_cpu() internally.
Also, to prevent a newly loaded scheduler from seeing outdated rq clocks
left over from a previous scx scheduler, all rq clocks are invalidated
when a BPF scheduler is unloaded.
One example of calling scx_bpf_now_ns() while the rq clock is invalid
(as at T5) is scx_central [1]. The scx_central scheduler uses a BPF
timer for preemptive scheduling: every millisecond, the timer callback
checks whether the currently running tasks have exceeded their
timeslices. At the beginning of the callback (central_timerfn in
scx_central.bpf.c), scx_central reads the current time. When the
callback runs, the rq clock can be invalid, just as at T5. In that
case, scx_bpf_now_ns() returns a fresh clock value rather than the
stale one updated at T2.
[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_central.bpf.c
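A trimmed sketch of that pattern is below. It is loosely modeled on
central_timerfn; the globals, array size, and timer interval are
illustrative assumptions rather than the exact scx_central code.

const volatile u32 nr_cpu_ids;          /* set by userspace at load time */
const volatile u64 slice_ns = 20000000; /* 20ms slice, illustrative */
u64 cpu_started_at[512];                /* when each CPU went busy; 0 = idle */

static int sketch_timerfn(void *map, int *key, struct bpf_timer *timer)
{
        /*
         * The timer callback runs outside the rq lock, so the rq clock
         * is invalid here (as at T5) and scx_bpf_now_ns() internally
         * falls back to a fresh sched_clock_cpu() value.
         */
        u64 now = scx_bpf_now_ns();
        s32 cpu;

        bpf_for(cpu, 0, nr_cpu_ids) {
                u64 *started_at = ARRAY_ELEM_PTR(cpu_started_at, cpu, nr_cpu_ids);

                /* preempt CPUs whose current task has exceeded its slice */
                if (started_at && *started_at && now - *started_at >= slice_ns)
                        scx_bpf_kick_cpu(cpu, SCX_KICK_PREEMPT);
        }

        bpf_timer_start(timer, 1000000 /* 1ms */, BPF_F_TIMER_CPU_PIN);
        return 0;
}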
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
kernel/sched/core.c | 6 ++-
kernel/sched/ext.c | 121 ++++++++++++++++++++++++++++++++++++++++++-
kernel/sched/sched.h | 28 +++++++++-
3 files changed, 151 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95e40895a519..b0191977d919 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -789,6 +789,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
void update_rq_clock(struct rq *rq)
{
s64 delta;
+ u64 clock;
lockdep_assert_rq_held(rq);
@@ -800,11 +801,14 @@ void update_rq_clock(struct rq *rq)
SCHED_WARN_ON(rq->clock_update_flags & RQCF_UPDATED);
rq->clock_update_flags |= RQCF_UPDATED;
#endif
+ clock = sched_clock_cpu(cpu_of(rq));
+ scx_rq_clock_update(rq, clock, true);
- delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
+ delta = clock - rq->clock;
if (delta < 0)
return;
rq->clock += delta;
+
update_rq_clock_task(rq, delta);
}
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index afeed9dfeecb..991a86e87187 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4910,7 +4910,7 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
struct task_struct *p;
struct rhashtable_iter rht_iter;
struct scx_dispatch_q *dsq;
- int i, kind;
+ int i, kind, cpu;
kind = atomic_read(&scx_exit_kind);
while (true) {
@@ -4993,6 +4993,15 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
scx_task_iter_stop(&sti);
percpu_up_write(&scx_fork_rwsem);
+ /*
+ * Invalidate all the rq clocks to prevent getting outdated
+ * rq clocks from a previous scx scheduler.
+ */
+ for_each_possible_cpu(cpu) {
+ struct rq *rq = cpu_rq(cpu);
+ scx_rq_clock_invalidate(rq);
+ }
+
/* no task is on scx, turn off all the switches and flush in-progress calls */
static_branch_disable(&__scx_ops_enabled);
for (i = SCX_OPI_BEGIN; i < SCX_OPI_END; i++)
@@ -7601,6 +7610,115 @@ __bpf_kfunc struct cgroup *scx_bpf_task_cgroup(struct task_struct *p)
}
#endif
+/**
+ * scx_bpf_now_ns - Returns a high-performance monotonically non-decreasing
+ * clock for the current CPU. The clock returned is in nanoseconds.
+ *
+ * It provides the following properties:
+ *
+ * 1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
+ * to account for execution time and track tasks' runtime properties.
+ * Unfortunately, on some hardware platforms, bpf_ktime_get_ns() -- which
+ * eventually reads a hardware timestamp counter -- is neither performant nor
+ * scalable. scx_bpf_now_ns() aims to provide a high-performance clock by
+ * using the rq clock in the scheduler core whenever possible.
+ *
+ * 2) High enough resolution for BPF scheduler use cases: In most BPF
+ * scheduler use cases, the required clock resolution is lower than that of
+ * the most accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now_ns()
+ * uses the rq clock in the scheduler core whenever it is valid, and it
+ * considers the rq clock valid from the time it is updated
+ * (update_rq_clock) until the rq is unlocked (rq_unpin_lock).
+ *
+ * 3) Monotonically non-decreasing clock on the same CPU: scx_bpf_now_ns()
+ * guarantees that clock values never go backward when compared on the same
+ * CPU. When comparing clock values taken on different CPUs, there is no
+ * such guarantee -- the clock can appear to go backward. The clock is
+ * monotonically *non-decreasing* rather than strictly increasing: two
+ * scx_bpf_now_ns() calls on the same CPU may return the same value while
+ * the same rq clock remains valid.
+ */
+__bpf_kfunc u64 scx_bpf_now_ns(void)
+{
+ struct rq *rq;
+ u64 clock;
+
+ preempt_disable();
+
+ /*
+ * If the rq clock is valid, use the cached rq clock
+ * whenever the clock does not go backward.
+ */
+ rq = this_rq();
+ clock = READ_ONCE(rq->scx.clock);
+
+ if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID) ||
+ time_after_eq64(READ_ONCE(rq->scx.prev_clock), clock)) {
+ /*
+ * If the rq clock is invalid or goes backward,
+ * start a new rq clock period with a fresh sched_clock_cpu().
+ *
+ * The cached rq clock can go backward because this kfunc can
+ * be called from anywhere, including timer interrupts and
+ * tracepoints for code running in the IRQ context.
+ *
+ * For example, suppose a timer interrupt occurs while running
+ * scx_bpf_now_ns(), *after* reading the rq clock and *before*
+ * evaluating the if condition.
+ *
+ * The timer interrupt will eventually call a BPF scheduler's
+ * ops.tick(), and the BPF scheduler can call scx_bpf_now_ns().
+ * Since the scheduler core updates the rq clock before calling
+ * ops.tick(), the scx_bpf_now_ns() call will get the fresh
+ * clock.
+ *
+ * After handling the timer interrupt, the interrupted
+ * scx_bpf_now_ns() will be resumed, and the if condition will
+ * be evaluated.
+ *
+ * In this case, the clock read before the timer interrupt
+ * will be the same as rq->scx.prev_clock.
+ *
+ * When such a case is detected (i.e., the clock just read is
+ * less than or equal to prev_clock), get a new rq clock with a
+ * fresh sched_clock_cpu().
+ *
+ * The following illustrates the detailed sequence over time:
+ *
+ * Time| Task | Timer interrupt
+ * ----+------------------------+------------------------------
+ * T1 | call scx_bpf_now_ns() |
+ * T2 | load rq->scx.clock |
+ * T3 | compare rq->scx.flags |
+ * T4 | | timer interrupt occurred
+ * T5 | | sched_tick() called
+ * T6 | | + rq_lock() called
+ * T7 | | + update_rq_clock() called
+ * T8 | | + sched_class->task_tick()
+ * T9 | | + scx_bpf_now_ns() returns
+ * | | a fresh clock updated at T7.
+ * T10 | | + rq_unlock() called
+ * T11 | resume the execution |
+ * T12 | compare scx.prev_clock |
+ *
+ * Note that since rq->scx.prev_clock at T12 was updated at T7,
+ * it will equal the rq->scx.clock value loaded at T2.
+ */
+ clock = sched_clock_cpu(cpu_of(rq));
+
+ /*
+ * The rq clock is updated outside of the rq lock.
+ * In this case, keep the updated rq clock invalid so the next
+ * kfunc call outside the rq lock gets a fresh rq clock.
+ */
+ scx_rq_clock_update(rq, clock, false);
+ }
+
+ preempt_enable();
+
+ return clock;
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(scx_kfunc_ids_any)
@@ -7632,6 +7750,7 @@ BTF_ID_FLAGS(func, scx_bpf_cpu_rq)
#ifdef CONFIG_CGROUP_SCHED
BTF_ID_FLAGS(func, scx_bpf_task_cgroup, KF_RCU | KF_ACQUIRE)
#endif
+BTF_ID_FLAGS(func, scx_bpf_now_ns)
BTF_KFUNCS_END(scx_kfunc_ids_any)
static const struct btf_kfunc_id_set scx_kfunc_set_any = {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 440ecedf871b..892975543a6d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -754,6 +754,7 @@ enum scx_rq_flags {
SCX_RQ_BAL_PENDING = 1 << 2, /* balance hasn't run yet */
SCX_RQ_BAL_KEEP = 1 << 3, /* balance decided to keep current */
SCX_RQ_BYPASSING = 1 << 4,
+ SCX_RQ_CLK_VALID = 1 << 5, /* RQ clock is fresh and valid */
SCX_RQ_IN_WAKEUP = 1 << 16,
SCX_RQ_IN_BALANCE = 1 << 17,
@@ -766,9 +767,11 @@ struct scx_rq {
unsigned long ops_qseq;
u64 extra_enq_flags; /* see move_task_to_local_dsq() */
u32 nr_running;
- u32 flags;
u32 cpuperf_target; /* [0, SCHED_CAPACITY_SCALE] */
bool cpu_released;
+ u32 flags;
+ u64 clock; /* current per-rq clock -- see scx_bpf_now_ns() */
+ u64 prev_clock; /* previous per-rq clock -- see scx_bpf_now_ns() */
cpumask_var_t cpus_to_kick;
cpumask_var_t cpus_to_kick_if_idle;
cpumask_var_t cpus_to_preempt;
@@ -1725,9 +1728,30 @@ DECLARE_STATIC_KEY_FALSE(__scx_switched_all); /* all fair class tasks on SCX */
#define scx_enabled() static_branch_unlikely(&__scx_ops_enabled)
#define scx_switched_all() static_branch_unlikely(&__scx_switched_all)
+
+static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid)
+{
+ if (!scx_enabled())
+ return;
+ WRITE_ONCE(rq->scx.prev_clock, rq->scx.clock);
+ WRITE_ONCE(rq->scx.clock, clock);
+ if (valid)
+ WRITE_ONCE(rq->scx.flags, rq->scx.flags | SCX_RQ_CLK_VALID);
+}
+
+static inline void scx_rq_clock_invalidate(struct rq *rq)
+{
+ if (!scx_enabled())
+ return;
+ WRITE_ONCE(rq->scx.flags, rq->scx.flags & ~SCX_RQ_CLK_VALID);
+}
+
#else /* !CONFIG_SCHED_CLASS_EXT */
#define scx_enabled() false
#define scx_switched_all() false
+
+static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid) {}
+static inline void scx_rq_clock_invalidate(struct rq *rq) {}
#endif /* !CONFIG_SCHED_CLASS_EXT */
/*
@@ -1759,7 +1783,7 @@ static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)
if (rq->clock_update_flags > RQCF_ACT_SKIP)
rf->clock_update_flags = RQCF_UPDATED;
#endif
-
+ scx_rq_clock_invalidate(rq);
lockdep_unpin_lock(__rq_lockp(rq), rf->cookie);
}
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v5 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
2024-12-16 3:11 ` [PATCH v5 1/6] sched_ext: Relocate scx_enabled() related code Changwoo Min
2024-12-16 3:11 ` [PATCH v5 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 3:11 ` [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
` (2 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
scx_bpf_now_ns() is added to the header files so that BPF schedulers
can use it. The compat macro falls back to bpf_ktime_get_ns() on older
kernels that do not provide the kfunc.
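With that in place, a scheduler can call scx_bpf_now_ns()
unconditionally. A minimal, illustrative call site follows; the
callback name and message are assumptions, and common.bpf.h is assumed
to pull in compat.bpf.h, as the in-tree schedulers rely on.

#include <scx/common.bpf.h>

void BPF_STRUCT_OPS(example_running, struct task_struct *p)
{
        /*
         * Resolves to the scx_bpf_now_ns() kfunc when the kernel
         * provides it (bpf_ksym_exists() is true); otherwise the
         * compat macro substitutes bpf_ktime_get_ns().
         */
        u64 now = scx_bpf_now_ns();

        bpf_printk("pid %d went on-CPU at %llu ns", p->pid, now);
}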
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
tools/sched_ext/include/scx/common.bpf.h | 1 +
tools/sched_ext/include/scx/compat.bpf.h | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 858ba1f438f6..79f0798a5350 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -76,6 +76,7 @@ bool scx_bpf_task_running(const struct task_struct *p) __ksym;
s32 scx_bpf_task_cpu(const struct task_struct *p) __ksym;
struct rq *scx_bpf_cpu_rq(s32 cpu) __ksym;
struct cgroup *scx_bpf_task_cgroup(struct task_struct *p) __ksym __weak;
+u64 scx_bpf_now_ns(void) __ksym __weak;
/*
* Use the following as @it__iter when calling scx_bpf_dsq_move[_vtime]() from
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index d56520100a26..2f16f30ae741 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -125,6 +125,11 @@ bool scx_bpf_dispatch_vtime_from_dsq___compat(struct bpf_iter_scx_dsq *it__iter,
false; \
})
+#define scx_bpf_now_ns() \
+ (bpf_ksym_exists(scx_bpf_now_ns) ? \
+ scx_bpf_now_ns() : \
+ bpf_ktime_get_ns())
+
/*
* Define sched_ext_ops. This may be expanded to define multiple variants for
* backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
` (2 preceding siblings ...)
2024-12-16 3:11 ` [PATCH v5 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 7:36 ` Andrea Righi
2024-12-16 3:11 ` [PATCH v5 5/6] sched_ext: Replace bpf_ktime_get_ns() with scx_bpf_now_ns() Changwoo Min
2024-12-16 3:11 ` [PATCH v5 6/6] sched_ext: Use time helpers in BPF schedulers Changwoo Min
5 siblings, 1 reply; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
The following wraparound-safe time helpers are added for BPF schedulers
(a usage sketch follows the list):
- vtime_delta(after, before)
- vtime_after(a, b)
- vtime_before(a, b)
- vtime_after_eq(a, b)
- vtime_before_eq(a, b)
- vtime_in_range(a, b, c)
- vtime_in_range_open(a, b, c)
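For example, a wraparound-safe vtime clamp in an enqueue path might
look like the sketch below. The DSQ id, vtime_now handling, and ops
name are illustrative assumptions; the clamp mirrors the open-coded
comparison this series removes from scx_simple, and the
scx_bpf_dsq_insert_vtime() API is assumed.

#define SHARED_DSQ 0            /* illustrative DSQ id, as in scx_simple */

static u64 vtime_now;           /* global vtime, advanced elsewhere */

void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
{
        u64 vtime = p->scx.dsq_vtime;

        /*
         * vtime_before() compares through (s64)(a - b), so it stays
         * correct across u64 wraparound, unlike a raw "<". Cap how far
         * a long-idle task's vtime may lag behind vtime_now.
         */
        if (vtime_before(vtime, vtime_now - SCX_SLICE_DFL))
                vtime = vtime_now - SCX_SLICE_DFL;

        scx_bpf_dsq_insert_vtime(p, SHARED_DSQ, SCX_SLICE_DFL, vtime, enq_flags);
}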
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
tools/sched_ext/include/scx/common.bpf.h | 94 ++++++++++++++++++++++++
1 file changed, 94 insertions(+)
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 79f0798a5350..923bbf57e4f1 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -408,6 +408,100 @@ static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask)
void bpf_rcu_read_lock(void) __ksym;
void bpf_rcu_read_unlock(void) __ksym;
+/*
+ * Time helpers, most of which are from jiffies.h.
+ */
+
+/**
+ * vtime_delta - Calculate the delta between new and old timestamps
+ * @after: newer timestamp, as u64
+ * @before: older timestamp, as u64
+ *
+ * Return: the time difference, which is >= 0
+ */
+static inline s64 vtime_delta(u64 after, u64 before)
+{
+ return (s64)(after - before) > 0 ? (s64)(after - before) : 0;
+}
+
+/**
+ * vtime_after - returns true if the time a is after time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Do this with "<0" and ">=0" to only test the sign of the result. A
+ * good compiler would generate better code (and a really good compiler
+ * wouldn't care). Gcc is currently neither.
+ *
+ * Return: %true if time a is after time b, otherwise %false.
+ */
+static inline bool vtime_after(u64 a, u64 b)
+{
+ return (s64)(b - a) < 0;
+}
+
+/**
+ * vtime_before - returns true if the time a is before time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is before time b, otherwise %false.
+ */
+static inline bool vtime_before(u64 a, u64 b)
+{
+ return vtime_after(b, a);
+}
+
+/**
+ * vtime_after_eq - returns true if the time a is after or the same as time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is after or the same as time b, otherwise %false.
+ */
+static inline bool vtime_after_eq(u64 a, u64 b)
+{
+ return (s64)(a - b) >= 0;
+}
+
+/**
+ * vtime_before_eq - returns true if the time a is before or the same as time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is before or the same as time b, otherwise %false.
+ */
+static inline bool vtime_before_eq(u64 a, u64 b)
+{
+ return vtime_after_eq(b, a);
+}
+
+/**
+ * vtime_in_range - Test whether a is in the range [b, c].
+ * @a: time to test
+ * @b: beginning of the range
+ * @c: end of the range
+ *
+ * Return: %true if time a is in the range [b, c], otherwise %false.
+ */
+static inline bool vtime_in_range(u64 a, u64 b, u64 c)
+{
+ return vtime_after_eq(a, b) && vtime_before_eq(a, c);
+}
+
+/**
+ * vtime_in_range_open - Test whether a is in the range [b, c).
+ * @a: time to test
+ * @b: beginning of the range
+ * @c: end of the range
+ *
+ * Return: %true if time a is in the range [b, c), otherwise %false.
+ */
+static inline bool vtime_in_range_open(u64 a, u64 b, u64 c)
+{
+ return vtime_after_eq(a, b) && vtime_before(a, c);
+}
+
/*
* Other helpers
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers
2024-12-16 3:11 ` [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
@ 2024-12-16 7:36 ` Andrea Righi
2024-12-16 15:02 ` Changwoo Min
0 siblings, 1 reply; 10+ messages in thread
From: Andrea Righi @ 2024-12-16 7:36 UTC (permalink / raw)
To: Changwoo Min
Cc: tj, void, mingo, peterz, kernel-dev, linux-kernel, Changwoo Min
Hi Changwoo,
On Mon, Dec 16, 2024 at 12:11:42PM +0900, Changwoo Min wrote:
> The following functions are added for BPF schedulers:
> - vtime_delta(after, before)
> - vtime_after(a, b)
> - vtime_before(a, b)
> - vtime_after_eq(a, b)
> - vtime_before_eq(a, b)
> - vtime_in_range(a, b, c)
> - vtime_in_range_open(a, b, c)
Considering that we sync these headers and sched examples from the scx repo
(see https://github.com/sched-ext/scx/blob/main/scheds/sync-to-kernel.sh),
maybe we could have a corresponding change there as well and include a link
to the PR here (as "Link: ...").
Moreover, this particular change doesn't require the rest of the patch
set to be applied to the kernel, so a PR can be created even now. In this
way we don't risk getting out of sync.
What do you think?
Thanks,
-Andrea
> [...]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers
2024-12-16 7:36 ` Andrea Righi
@ 2024-12-16 15:02 ` Changwoo Min
0 siblings, 0 replies; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 15:02 UTC (permalink / raw)
To: Andrea Righi, Changwoo Min
Cc: tj, void, mingo, peterz, kernel-dev, linux-kernel
Hello,
On 24. 12. 16. 16:36, Andrea Righi wrote:
> Hi Changwoo,
>
> On Mon, Dec 16, 2024 at 12:11:42PM +0900, Changwoo Min wrote:
>> The following functions are added for BPF schedulers:
>> - vtime_delta(after, before)
>> - vtime_after(a, b)
>> - vtime_before(a, b)
>> - vtime_after_eq(a, b)
>> - vtime_before_eq(a, b)
>> - vtime_in_range(a, b, c)
>> - vtime_in_range_open(a, b, c)
>
> Considering that we sync these headers and sched examples from the scx repo
> (see https://github.com/sched-ext/scx/blob/main/scheds/sync-to-kernel.sh),
> maybe we could have a corresponding change there as well and include a link
> to the PR here (as "Link: ...").
>
> Moreover, this particular change doesn't require the rest of the patch
> set to be applied to the kernel, so a PR can be created even now. In this
> way we don't risk getting out of sync.
>
> What do you think?
Thanks for the suggestion! As soon as there is a consensus,
I will submit a PR to the scx repo for common.bpf.h and the
affected scx schedulers.
Regards,
Changwoo Min
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v5 5/6] sched_ext: Replace bpf_ktime_get_ns() with scx_bpf_now_ns()
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
` (3 preceding siblings ...)
2024-12-16 3:11 ` [PATCH v5 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 3:11 ` [PATCH v5 6/6] sched_ext: Use time helpers in BPF schedulers Changwoo Min
5 siblings, 0 replies; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
In the BPF schedulers that use bpf_ktime_get_ns() -- scx_central and
scx_flatcg -- replace the bpf_ktime_get_ns() calls with scx_bpf_now_ns().
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
tools/sched_ext/scx_central.bpf.c | 4 ++--
tools/sched_ext/scx_flatcg.bpf.c | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index 2907df78241e..ea1d853b9dd4 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -245,7 +245,7 @@ void BPF_STRUCT_OPS(central_running, struct task_struct *p)
s32 cpu = scx_bpf_task_cpu(p);
u64 *started_at = ARRAY_ELEM_PTR(cpu_started_at, cpu, nr_cpu_ids);
if (started_at)
- *started_at = bpf_ktime_get_ns() ?: 1; /* 0 indicates idle */
+ *started_at = scx_bpf_now_ns() ?: 1; /* 0 indicates idle */
}
void BPF_STRUCT_OPS(central_stopping, struct task_struct *p, bool runnable)
@@ -258,7 +258,7 @@ void BPF_STRUCT_OPS(central_stopping, struct task_struct *p, bool runnable)
static int central_timerfn(void *map, int *key, struct bpf_timer *timer)
{
- u64 now = bpf_ktime_get_ns();
+ u64 now = scx_bpf_now_ns();
u64 nr_to_kick = nr_queued;
s32 i, curr_cpu;
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 3dbfa82883be..85e33e45f818 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -734,7 +734,7 @@ void BPF_STRUCT_OPS(fcg_dispatch, s32 cpu, struct task_struct *prev)
struct fcg_cpu_ctx *cpuc;
struct fcg_cgrp_ctx *cgc;
struct cgroup *cgrp;
- u64 now = bpf_ktime_get_ns();
+ u64 now = scx_bpf_now_ns();
bool picked_next = false;
cpuc = find_cpu_ctx();
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH v5 6/6] sched_ext: Use time helpers in BPF schedulers
2024-12-16 3:11 [PATCH v5 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
` (4 preceding siblings ...)
2024-12-16 3:11 ` [PATCH v5 5/6] sched_ext: Replace bpf_ktime_get_ns() with scx_bpf_now_ns() Changwoo Min
@ 2024-12-16 3:11 ` Changwoo Min
2024-12-16 7:38 ` Andrea Righi
5 siblings, 1 reply; 10+ messages in thread
From: Changwoo Min @ 2024-12-16 3:11 UTC (permalink / raw)
To: tj, void, mingo, peterz, arighi; +Cc: kernel-dev, linux-kernel, Changwoo Min
Modify the BPF schedulers to use the time helpers defined in common.bpf.h.
Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
tools/sched_ext/scx_central.bpf.c | 5 -----
tools/sched_ext/scx_flatcg.bpf.c | 11 +++--------
tools/sched_ext/scx_simple.bpf.c | 5 -----
3 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index ea1d853b9dd4..c3b6998ea83e 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -87,11 +87,6 @@ struct {
__type(value, struct central_timer);
} central_timer SEC(".maps");
-static bool vtime_before(u64 a, u64 b)
-{
- return (s64)(a - b) < 0;
-}
-
s32 BPF_STRUCT_OPS(central_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 85e33e45f818..2735ec25e511 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -137,11 +137,6 @@ static u64 div_round_up(u64 dividend, u64 divisor)
return (dividend + divisor - 1) / divisor;
}
-static bool vtime_before(u64 a, u64 b)
-{
- return (s64)(a - b) < 0;
-}
-
static bool cgv_node_less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
struct cgv_node *cgc_a, *cgc_b;
@@ -920,14 +915,14 @@ void BPF_STRUCT_OPS(fcg_cgroup_move, struct task_struct *p,
struct cgroup *from, struct cgroup *to)
{
struct fcg_cgrp_ctx *from_cgc, *to_cgc;
- s64 vtime_delta;
+ s64 delta;
/* find_cgrp_ctx() triggers scx_ops_error() on lookup failures */
if (!(from_cgc = find_cgrp_ctx(from)) || !(to_cgc = find_cgrp_ctx(to)))
return;
- vtime_delta = p->scx.dsq_vtime - from_cgc->tvtime_now;
- p->scx.dsq_vtime = to_cgc->tvtime_now + vtime_delta;
+ delta = vtime_delta(p->scx.dsq_vtime, from_cgc->tvtime_now);
+ p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
}
s32 BPF_STRUCT_OPS_SLEEPABLE(fcg_init)
diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
index 31f915b286c6..6561d400ae6d 100644
--- a/tools/sched_ext/scx_simple.bpf.c
+++ b/tools/sched_ext/scx_simple.bpf.c
@@ -52,11 +52,6 @@ static void stat_inc(u32 idx)
(*cnt_p)++;
}
-static inline bool vtime_before(u64 a, u64 b)
-{
- return (s64)(a - b) < 0;
-}
-
s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
bool is_idle = false;
--
2.47.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v5 6/6] sched_ext: Use time helpers in BPF schedulers
2024-12-16 3:11 ` [PATCH v5 6/6] sched_ext: Use time helpers in BPF schedulers Changwoo Min
@ 2024-12-16 7:38 ` Andrea Righi
0 siblings, 0 replies; 10+ messages in thread
From: Andrea Righi @ 2024-12-16 7:38 UTC (permalink / raw)
To: Changwoo Min
Cc: tj, void, mingo, peterz, kernel-dev, linux-kernel, Changwoo Min
On Mon, Dec 16, 2024 at 12:11:44PM +0900, Changwoo Min wrote:
> Modify the BPF schedulers to use the time helpers defined in common.bpf.h.
>
> Signed-off-by: Changwoo Min <changwoo@igalia.com>
Ditto (about having the same change in the scx repo), as mentioned in
PATCH 4/6.
-Andrea
> [...]
^ permalink raw reply [flat|nested] 10+ messages in thread