linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
@ 2024-12-20  6:20 Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 1/6] sched_ext: Relocate scx_enabled() related code Changwoo Min
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

Many BPF schedulers (such as scx_central, scx_lavd, scx_rusty, scx_bpfland,
and scx_flash) frequently call bpf_ktime_get_ns() to track tasks' runtime
properties. Where supported, bpf_ktime_get_ns() eventually reads a hardware
timestamp counter (TSC). However, reading the hardware TSC is not
performant on some hardware platforms, degrading IPC.

This patchset addresses the performance problem of reading the hardware TSC
by leveraging the rq clock in the scheduler core and introducing a
scx_bpf_now_ns() function for BPF schedulers. Whenever the rq clock
is fresh and valid, scx_bpf_now_ns() returns the rq clock, which the
scheduler core has already updated (update_rq_clock), so it avoids an
extra read of the hardware TSC.

When the rq lock is released (rq_unpin_lock), the rq clock is invalidated,
so a subsequent scx_bpf_now_ns() call returns a fresh sched_clock value to the caller.

In addition, scx_bpf_now_ns() guarantees the clock is monotonically
non-decreasing for the same CPU, so the clock cannot go backward
on that CPU.

Using scx_bpf_now_ns() reduces the number of hardware TSC reads
by 51-82% (76% for scx_lavd, 82% for scx_bpfland, and 51% for scx_rusty)
for the following benchmark:

    perf bench -f simple sched messaging -t -g 20 -l 6000
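
As an illustration of the intended usage (not part of this series, and
assuming the usual scx common.bpf.h includes), a BPF scheduler can simply
substitute scx_bpf_now_ns() for bpf_ktime_get_ns() when accounting runtime
in ops.running()/ops.stopping(). The map, struct task_ctx, and callback
names below are made up for this sketch:

struct task_ctx {
	u64	started_at;
	u64	runtime;
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct task_ctx);
} task_ctxs SEC(".maps");

void BPF_STRUCT_OPS(example_running, struct task_struct *p)
{
	struct task_ctx *taskc;

	taskc = bpf_task_storage_get(&task_ctxs, p, 0,
				     BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (taskc)
		taskc->started_at = scx_bpf_now_ns();	/* rq clock valid here */
}

void BPF_STRUCT_OPS(example_stopping, struct task_struct *p, bool runnable)
{
	struct task_ctx *taskc;

	taskc = bpf_task_storage_get(&task_ctxs, p, 0, 0);
	if (taskc && taskc->started_at)
		taskc->runtime += scx_bpf_now_ns() - taskc->started_at;
}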

ChangeLog v5 -> v6:
  - Drop prev_clock because a race between a process context and an interrupt
    context (e.g., timer interrupt) is not observable from a caller (discussed
    with Tejun offline), so it is not necessary to check prev_clock.

ChangeLog v4 -> v5:
  - Merge patches 2, 3, and 4 into one for readability.
  - Use a time helper (time_after_eq64) for time comparison in scx_bpf_now_ns().
  - Do not validate the rq clock outside the rq critical section for
    more predictable behavior.
  - Improve the comment at scx_bpf_now_ns() for readability.
  - Rename scx_rq_clock_stale() to scx_rq_clock_invalidate().
  - Invalidate all the rq clocks upon unloading to prevent getting outdated
    rq clocks from a previous scx scheduler.
  - Use READ/WRITE_ONCE() when accessing rq->scx.{clock, prev_clock, flags}
    to properly handle concurrent accesses from an interrupt context.
  - Add time helpers for BPF schedulers.
  - Update the rdtsc reduction numbers in the cover letter with the latest
    scx_scheds (1.0.7) and the updated scx_bpf_now_ns().

ChangeLog v3 -> v4:
  - Separate the code relocation related to scx_enabled() into a
    separate patch.
  - Remove scx_rq_clock_stale() after (or before) ops.running() and
    ops.update_idle() calls
  - Rename scx_bpf_clock_get_ns() into scx_bpf_now_ns() and revise it to
    address the comments
  - Move the per-CPU variable holding a prev clock into scx_rq
    (rq->scx.prev_clock)
  - Add a comment describing when the clock could go backward in
    scx_bpf_now_ns()
  - Rebase the code to the tip of Tejun's sched_ext repo (for-next
    branch)

ChangeLog v2 -> v3:
  - To avoid unnecessarily modifying cache lines, scx_rq_clock_update()
    and scx_rq_clock_stale() update the clock and flags only when a
    sched_ext scheduler is enabled.

ChangeLog v1 -> v2:
  - Rename SCX_RQ_CLK_UPDATED to SCX_RQ_CLK_VALID to denote the validity
    of an rq clock clearly.
  - Rearrange the clock and flags fields in struct scx_rq to make sure
    they are in the same cacheline to minimize cache misses.
  - Add an additional explanation to the commit message in the 2/5 patch
    describing when the rq clock will be reused with an example.
  - Fix typos
  - Rebase the code to the tip of Tejun's sched_ext repo

Changwoo Min (6):
  sched_ext: Relocate scx_enabled() related code
  sched_ext: Implement scx_bpf_now_ns()
  sched_ext: Add scx_bpf_now_ns() for BPF scheduler
  sched_ext: Add time helpers for BPF schedulers
  sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now_ns()
  sched_ext: Use time helpers in BPF schedulers

 kernel/sched/core.c                      |  6 +-
 kernel/sched/ext.c                       | 75 ++++++++++++++++++-
 kernel/sched/sched.h                     | 52 +++++++++----
 tools/sched_ext/include/scx/common.bpf.h | 95 ++++++++++++++++++++++++
 tools/sched_ext/include/scx/compat.bpf.h |  5 ++
 tools/sched_ext/scx_central.bpf.c        |  9 +--
 tools/sched_ext/scx_flatcg.bpf.c         | 13 +---
 tools/sched_ext/scx_simple.bpf.c         |  5 --
 8 files changed, 222 insertions(+), 38 deletions(-)

-- 
2.47.1



* [PATCH v6 1/6] sched_ext: Relocate scx_enabled() related code
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

scx_enabled() will be used in scx_rq_clock_update/invalidate()
in the following patch, so relocate the scx_enabled() related code
to the proper location.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 kernel/sched/sched.h | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 76f5f53a645f..440ecedf871b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1717,6 +1717,19 @@ struct rq_flags {
 
 extern struct balance_callback balance_push_callback;
 
+#ifdef CONFIG_SCHED_CLASS_EXT
+extern const struct sched_class ext_sched_class;
+
+DECLARE_STATIC_KEY_FALSE(__scx_ops_enabled);	/* SCX BPF scheduler loaded */
+DECLARE_STATIC_KEY_FALSE(__scx_switched_all);	/* all fair class tasks on SCX */
+
+#define scx_enabled()		static_branch_unlikely(&__scx_ops_enabled)
+#define scx_switched_all()	static_branch_unlikely(&__scx_switched_all)
+#else /* !CONFIG_SCHED_CLASS_EXT */
+#define scx_enabled()		false
+#define scx_switched_all()	false
+#endif /* !CONFIG_SCHED_CLASS_EXT */
+
 /*
  * Lockdep annotation that avoids accidental unlocks; it's like a
  * sticky/continuous lockdep_assert_held().
@@ -2505,19 +2518,6 @@ extern const struct sched_class rt_sched_class;
 extern const struct sched_class fair_sched_class;
 extern const struct sched_class idle_sched_class;
 
-#ifdef CONFIG_SCHED_CLASS_EXT
-extern const struct sched_class ext_sched_class;
-
-DECLARE_STATIC_KEY_FALSE(__scx_ops_enabled);	/* SCX BPF scheduler loaded */
-DECLARE_STATIC_KEY_FALSE(__scx_switched_all);	/* all fair class tasks on SCX */
-
-#define scx_enabled()		static_branch_unlikely(&__scx_ops_enabled)
-#define scx_switched_all()	static_branch_unlikely(&__scx_switched_all)
-#else /* !CONFIG_SCHED_CLASS_EXT */
-#define scx_enabled()		false
-#define scx_switched_all()	false
-#endif /* !CONFIG_SCHED_CLASS_EXT */
-
 /*
  * Iterate only active classes. SCX can take over all fair tasks or be
  * completely disabled. If the former, skip fair. If the latter, skip SCX.
-- 
2.47.1



* [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns()
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 1/6] sched_ext: Relocate scx_enabled() related code Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-20 21:30   ` Andrea Righi
  2024-12-24 21:47   ` Tejun Heo
  2024-12-20  6:20 ` [PATCH v6 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler Changwoo Min
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

Implement scx_bpf_now_ns(), which returns a high-performance monotonically
non-decreasing clock for the current CPU. The clock returned is in nanoseconds.

It provides the following properties:

1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
 to account for execution time and track tasks' runtime properties.
 Unfortunately, in some hardware platforms, bpf_ktime_get_ns() -- which
 eventually reads a hardware timestamp counter -- is neither performant nor
 scalable. scx_bpf_now_ns() aims to provide a high-performance clock by
 using the rq clock in the scheduler core whenever possible.

2) High enough resolution for the BPF scheduler use cases: In most BPF
 scheduler use cases, the required clock resolution is lower than the most
 accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now_ns() basically
 uses the rq clock in the scheduler core whenever it is valid. It considers
 that the rq clock is valid from the time the rq clock is updated
 (update_rq_clock) until the rq is unlocked (rq_unpin_lock).

3) Monotonically non-decreasing clock for the same CPU: scx_bpf_now_ns()
 guarantees the clock never goes backward when compared on the same CPU.
 On the other hand, when comparing clocks in different CPUs, there is no
 such guarantee -- the clock can go backward. It provides a monotonically
 *non-decreasing* (rather than strictly increasing) clock, so two
 scx_bpf_now_ns() calls on the same CPU may return the same clock value
 while the rq clock remains valid.

An rq clock becomes valid when it is updated using update_rq_clock()
and invalidated when the rq is unlocked using rq_unpin_lock().

Let's suppose the following timeline in the scheduler core:

   T1. rq_lock(rq)
   T2. update_rq_clock(rq)
   T3. a sched_ext BPF operation
   T4. rq_unlock(rq)
   T5. a sched_ext BPF operation
   T6. rq_lock(rq)
   T7. update_rq_clock(rq)

For [T2, T4), we consider the rq clock valid (SCX_RQ_CLK_VALID is set),
so scx_bpf_now_ns() calls during [T2, T4) (including T3) return the rq
clock updated at T2. For the duration [T4, T7), during which a BPF
scheduler can still call scx_bpf_now_ns() (T5), we consider the rq clock
invalid (SCX_RQ_CLK_VALID is cleared at T4). So a scx_bpf_now_ns() call
at T5 returns a fresh clock value by calling sched_clock_cpu() internally.
Also, to prevent getting outdated rq clocks from a previous scx scheduler,
all the rq clocks are invalidated when a BPF scheduler is unloaded.

One example of calling scx_bpf_now_ns() while the rq clock is invalid
(as at T5) is scx_central [1]. The scx_central scheduler uses a BPF
timer for preemptive scheduling. Every msec, the timer callback checks
whether the currently running tasks have exceeded their timeslice. At the
beginning of the BPF timer callback (central_timerfn in scx_central.bpf.c),
scx_central reads the current time. When the BPF timer callback runs, the
rq clock can be invalid, just as at T5. In this case, scx_bpf_now_ns()
returns a fresh clock value rather than the old one (T2).

[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_central.bpf.c
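
A minimal, hypothetical sketch of that pattern (the real conversion of
central_timerfn is done in a later patch of this series; the per-CPU
checks are elided and TIMER_INTERVAL_NS is an illustrative constant):

#define TIMER_INTERVAL_NS	1000000		/* 1 msec, illustrative */

static int example_timerfn(void *map, int *key, struct bpf_timer *timer)
{
	u64 now = scx_bpf_now_ns();	/* rq clock may be invalid here (T5),
					 * so a fresh clock value is returned */

	/* ... compare 'now' against each CPU's start timestamp and preempt ... */

	bpf_timer_start(timer, TIMER_INTERVAL_NS, 0);
	return 0;
}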

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 kernel/sched/core.c  |  6 +++-
 kernel/sched/ext.c   | 75 +++++++++++++++++++++++++++++++++++++++++++-
 kernel/sched/sched.h | 26 +++++++++++++--
 3 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95e40895a519..b0191977d919 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -789,6 +789,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 void update_rq_clock(struct rq *rq)
 {
 	s64 delta;
+	u64 clock;
 
 	lockdep_assert_rq_held(rq);
 
@@ -800,11 +801,14 @@ void update_rq_clock(struct rq *rq)
 		SCHED_WARN_ON(rq->clock_update_flags & RQCF_UPDATED);
 	rq->clock_update_flags |= RQCF_UPDATED;
 #endif
+	clock = sched_clock_cpu(cpu_of(rq));
+	scx_rq_clock_update(rq, clock, true);
 
-	delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
+	delta = clock - rq->clock;
 	if (delta < 0)
 		return;
 	rq->clock += delta;
+
 	update_rq_clock_task(rq, delta);
 }
 
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index afeed9dfeecb..248b8c91149f 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4910,7 +4910,7 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
 	struct task_struct *p;
 	struct rhashtable_iter rht_iter;
 	struct scx_dispatch_q *dsq;
-	int i, kind;
+	int i, kind, cpu;
 
 	kind = atomic_read(&scx_exit_kind);
 	while (true) {
@@ -4993,6 +4993,15 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
 	scx_task_iter_stop(&sti);
 	percpu_up_write(&scx_fork_rwsem);
 
+	/*
+	 * Invalidate all the rq clocks to prevent getting outdated
+	 * rq clocks from a previous scx scheduler.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct rq *rq = cpu_rq(cpu);
+		scx_rq_clock_invalidate(rq);
+	}
+
 	/* no task is on scx, turn off all the switches and flush in-progress calls */
 	static_branch_disable(&__scx_ops_enabled);
 	for (i = SCX_OPI_BEGIN; i < SCX_OPI_END; i++)
@@ -7601,6 +7610,69 @@ __bpf_kfunc struct cgroup *scx_bpf_task_cgroup(struct task_struct *p)
 }
 #endif
 
+/**
+ * scx_bpf_now_ns - Returns a high-performance monotonically non-decreasing
+ * clock for the current CPU. The clock returned is in nanoseconds.
+ *
+ * It provides the following properties:
+ *
+ * 1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
+ *  to account for execution time and track tasks' runtime properties.
+ *  Unfortunately, in some hardware platforms, bpf_ktime_get_ns() -- which
+ *  eventually reads a hardware timestamp counter -- is neither performant nor
+ *  scalable. scx_bpf_now_ns() aims to provide a high-performance clock by
+ *  using the rq clock in the scheduler core whenever possible.
+ *
+ * 2) High enough resolution for the BPF scheduler use cases: In most BPF
+ *  scheduler use cases, the required clock resolution is lower than the most
+ *  accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now_ns() basically
+ *  uses the rq clock in the scheduler core whenever it is valid. It considers
+ *  that the rq clock is valid from the time the rq clock is updated
+ *  (update_rq_clock) until the rq is unlocked (rq_unpin_lock).
+ *
+ * 3) Monotonically non-decreasing clock for the same CPU: scx_bpf_now_ns()
+ *  guarantees the clock never goes backward when comparing them in the same
+ *  CPU. On the other hand, when comparing clocks in different CPUs, there
+ *  is no such guarantee -- the clock can go backward. It provides a
+ *  monotonically *non-decreasing* clock so that it would provide the same
+ *  clock values in two different scx_bpf_now_ns() calls in the same CPU
+ *  during the same period of when the rq clock is valid.
+ */
+__bpf_kfunc u64 scx_bpf_now_ns(void)
+{
+	struct rq *rq;
+	u64 clock;
+
+	preempt_disable();
+
+	/*
+	 * If the rq clock is valid, use the cached rq clock.
+	 * Otherwise, return a fresh rq glock.
+	 *
+	 * Note that scx_bpf_now_ns() is re-entrant between a process
+	 * context and an interrupt context (e.g., timer interrupt).
+	 * However, we don't need to consider the race between them
+	 * because such race is not observable from a caller.
+	 */
+	rq = this_rq();
+	clock = READ_ONCE(rq->scx.clock);
+
+	if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID)) {
+		clock = sched_clock_cpu(cpu_of(rq));
+
+		/*
+		 * The rq clock is updated outside of the rq lock.
+		 * In this case, keep the updated rq clock invalid so the next
+		 * kfunc call outside the rq lock gets a fresh rq clock.
+		 */
+		scx_rq_clock_update(rq, clock, false);
+	}
+
+	preempt_enable();
+
+	return clock;
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(scx_kfunc_ids_any)
@@ -7632,6 +7704,7 @@ BTF_ID_FLAGS(func, scx_bpf_cpu_rq)
 #ifdef CONFIG_CGROUP_SCHED
 BTF_ID_FLAGS(func, scx_bpf_task_cgroup, KF_RCU | KF_ACQUIRE)
 #endif
+BTF_ID_FLAGS(func, scx_bpf_now_ns)
 BTF_KFUNCS_END(scx_kfunc_ids_any)
 
 static const struct btf_kfunc_id_set scx_kfunc_set_any = {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 440ecedf871b..53384012e77b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -754,6 +754,7 @@ enum scx_rq_flags {
 	SCX_RQ_BAL_PENDING	= 1 << 2, /* balance hasn't run yet */
 	SCX_RQ_BAL_KEEP		= 1 << 3, /* balance decided to keep current */
 	SCX_RQ_BYPASSING	= 1 << 4,
+	SCX_RQ_CLK_VALID	= 1 << 5, /* RQ clock is fresh and valid */
 
 	SCX_RQ_IN_WAKEUP	= 1 << 16,
 	SCX_RQ_IN_BALANCE	= 1 << 17,
@@ -766,9 +767,10 @@ struct scx_rq {
 	unsigned long		ops_qseq;
 	u64			extra_enq_flags;	/* see move_task_to_local_dsq() */
 	u32			nr_running;
-	u32			flags;
 	u32			cpuperf_target;		/* [0, SCHED_CAPACITY_SCALE] */
 	bool			cpu_released;
+	u32			flags;
+	u64			clock;			/* current per-rq clock -- see scx_bpf_now_ns() */
 	cpumask_var_t		cpus_to_kick;
 	cpumask_var_t		cpus_to_kick_if_idle;
 	cpumask_var_t		cpus_to_preempt;
@@ -1725,9 +1727,29 @@ DECLARE_STATIC_KEY_FALSE(__scx_switched_all);	/* all fair class tasks on SCX */
 
 #define scx_enabled()		static_branch_unlikely(&__scx_ops_enabled)
 #define scx_switched_all()	static_branch_unlikely(&__scx_switched_all)
+
+static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid)
+{
+	if (!scx_enabled())
+		return;
+	WRITE_ONCE(rq->scx.clock, clock);
+	if (valid)
+		WRITE_ONCE(rq->scx.flags, rq->scx.flags | SCX_RQ_CLK_VALID);
+}
+
+static inline void scx_rq_clock_invalidate(struct rq *rq)
+{
+	if (!scx_enabled())
+		return;
+	WRITE_ONCE(rq->scx.flags, rq->scx.flags & ~SCX_RQ_CLK_VALID);
+}
+
 #else /* !CONFIG_SCHED_CLASS_EXT */
 #define scx_enabled()		false
 #define scx_switched_all()	false
+
+static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid) {}
+static inline void scx_rq_clock_invalidate(struct rq *rq) {}
 #endif /* !CONFIG_SCHED_CLASS_EXT */
 
 /*
@@ -1759,7 +1781,7 @@ static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)
 	if (rq->clock_update_flags > RQCF_ACT_SKIP)
 		rf->clock_update_flags = RQCF_UPDATED;
 #endif
-
+	scx_rq_clock_invalidate(rq);
 	lockdep_unpin_lock(__rq_lockp(rq), rf->cookie);
 }
 
-- 
2.47.1



* [PATCH v6 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 1/6] sched_ext: Relocate scx_enabled() related code Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

scx_bpf_now_ns() is added to the header files so the BPF scheduler
can use it.
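
As an illustrative snippet (not part of the patch), any BPF scheduler
callback built against these headers can then simply do:

	u64 now = scx_bpf_now_ns();	/* resolves to the new kfunc when it
					 * exists, otherwise falls back to
					 * bpf_ktime_get_ns() through the
					 * compat.bpf.h macro below */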

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 tools/sched_ext/include/scx/common.bpf.h | 1 +
 tools/sched_ext/include/scx/compat.bpf.h | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 858ba1f438f6..79f0798a5350 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -76,6 +76,7 @@ bool scx_bpf_task_running(const struct task_struct *p) __ksym;
 s32 scx_bpf_task_cpu(const struct task_struct *p) __ksym;
 struct rq *scx_bpf_cpu_rq(s32 cpu) __ksym;
 struct cgroup *scx_bpf_task_cgroup(struct task_struct *p) __ksym __weak;
+u64 scx_bpf_now_ns(void) __ksym __weak;
 
 /*
  * Use the following as @it__iter when calling scx_bpf_dsq_move[_vtime]() from
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index d56520100a26..2f16f30ae741 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -125,6 +125,11 @@ bool scx_bpf_dispatch_vtime_from_dsq___compat(struct bpf_iter_scx_dsq *it__iter,
 	false;									\
 })
 
+#define scx_bpf_now_ns()							\
+	(bpf_ksym_exists(scx_bpf_now_ns) ?					\
+	 scx_bpf_now_ns() :							\
+	 bpf_ktime_get_ns())
+
 /*
  * Define sched_ext_ops. This may be expanded to define multiple variants for
  * backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
-- 
2.47.1



* [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
                   ` (2 preceding siblings ...)
  2024-12-20  6:20 ` [PATCH v6 3/6] sched_ext: Add scx_bpf_now_ns() for BPF scheduler Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-24 21:49   ` Tejun Heo
  2024-12-20  6:20 ` [PATCH v6 5/6] sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now_ns() Changwoo Min
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

The following functions are added for BPF schedulers:
- vtime_delta(after, before)
- vtime_after(a, b)
- vtime_before(a, b)
- vtime_after_eq(a, b)
- vtime_before_eq(a, b)
- vtime_in_range(a, b, c)
- vtime_in_range_open(a, b, c)
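
As a usage sketch (similar to what scx_simple ends up doing after the last
patch in this series), vtime_before() can clamp how much vtime budget an
idling task accumulates; 'vtime_now' and the callback name below are
hypothetical stand-ins for the scheduler's own state:

static u64 vtime_now;	/* hypothetical global virtual clock */

void BPF_STRUCT_OPS(example_enqueue, struct task_struct *p, u64 enq_flags)
{
	u64 vtime = p->scx.dsq_vtime;

	/* limit the budget an idling task can accumulate to one slice */
	if (vtime_before(vtime, vtime_now - SCX_SLICE_DFL))
		vtime = vtime_now - SCX_SLICE_DFL;

	/* ... insert @p into a vtime-ordered DSQ using 'vtime' ... */
}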

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 tools/sched_ext/include/scx/common.bpf.h | 94 ++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 79f0798a5350..923bbf57e4f1 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -408,6 +408,100 @@ static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask)
 void bpf_rcu_read_lock(void) __ksym;
 void bpf_rcu_read_unlock(void) __ksym;
 
+/*
+ * Time helpers, most of which are from jiffies.h.
+ */
+
+/**
+ * vtime_delta - Calculate the delta between new and old time stamp
+ * @after: first comparable as u64
+ * @before: second comparable as u64
+ *
+ * Return: the time difference, which is >= 0
+ */
+static inline s64 vtime_delta(u64 after, u64 before)
+{
+	return (s64)(after - before) > 0 ? (s64)(after - before) : 0;
+}
+
+/**
+ * vtime_after - returns true if the time a is after time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Do this with "<0" and ">=0" to only test the sign of the result. A
+ * good compiler would generate better code (and a really good compiler
+ * wouldn't care). Gcc is currently neither.
+ *
+ * Return: %true if time a is after time b, otherwise %false.
+ */
+static inline bool vtime_after(u64 a, u64 b)
+{
+	 return (s64)(b - a) < 0;
+}
+
+/**
+ * vtime_before - returns true if the time a is before time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is before time b, otherwise %false.
+ */
+static inline bool vtime_before(u64 a, u64 b)
+{
+	return vtime_after(b, a);
+}
+
+/**
+ * vtime_after_eq - returns true if the time a is after or the same as time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is after or the same as time b, otherwise %false.
+ */
+static inline bool vtime_after_eq(u64 a, u64 b)
+{
+	 return (s64)(a - b) >= 0;
+}
+
+/**
+ * vtime_before_eq - returns true if the time a is before or the same as time b.
+ * @a: first comparable as u64
+ * @b: second comparable as u64
+ *
+ * Return: %true if time a is before or the same as time b, otherwise %false.
+ */
+static inline bool vtime_before_eq(u64 a, u64 b)
+{
+	return vtime_after_eq(b, a);
+}
+
+/**
+ * vtime_in_range - Calculate whether a is in the range of [b, c].
+ * @a: time to test
+ * @b: beginning of the range
+ * @c: end of the range
+ *
+ * Return: %true if time a is in the range [b, c], otherwise %false.
+ */
+static inline bool vtime_in_range(u64 a, u64 b, u64 c)
+{
+	return vtime_after_eq(a, b) && vtime_before_eq(a, c);
+}
+
+/**
+ * vtime_in_range_open - Calculate whether a is in the range of [b, c).
+ * @a: time to test
+ * @b: beginning of the range
+ * @c: end of the range
+ *
+ * Return: %true if time a is in the range [b, c), otherwise %false.
+ */
+static inline bool vtime_in_range_open(u64 a, u64 b, u64 c)
+{
+	return vtime_after_eq(a, b) && vtime_before(a, c);
+}
+
 
 /*
  * Other helpers
-- 
2.47.1



* [PATCH v6 5/6] sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now_ns()
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
                   ` (3 preceding siblings ...)
  2024-12-20  6:20 ` [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-20  6:20 ` [PATCH v6 6/6] sched_ext: Use time helpers in BPF schedulers Changwoo Min
  2024-12-20 22:29 ` [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Andrea Righi
  6 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

In the BPF schedulers that use bpf_ktime_get_ns() -- scx_central and
scx_flatcg -- replace the bpf_ktime_get_ns() calls with scx_bpf_now_ns().

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 tools/sched_ext/scx_central.bpf.c | 4 ++--
 tools/sched_ext/scx_flatcg.bpf.c  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index 2907df78241e..ea1d853b9dd4 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -245,7 +245,7 @@ void BPF_STRUCT_OPS(central_running, struct task_struct *p)
 	s32 cpu = scx_bpf_task_cpu(p);
 	u64 *started_at = ARRAY_ELEM_PTR(cpu_started_at, cpu, nr_cpu_ids);
 	if (started_at)
-		*started_at = bpf_ktime_get_ns() ?: 1;	/* 0 indicates idle */
+		*started_at = scx_bpf_now_ns() ?: 1;	/* 0 indicates idle */
 }
 
 void BPF_STRUCT_OPS(central_stopping, struct task_struct *p, bool runnable)
@@ -258,7 +258,7 @@ void BPF_STRUCT_OPS(central_stopping, struct task_struct *p, bool runnable)
 
 static int central_timerfn(void *map, int *key, struct bpf_timer *timer)
 {
-	u64 now = bpf_ktime_get_ns();
+	u64 now = scx_bpf_now_ns();
 	u64 nr_to_kick = nr_queued;
 	s32 i, curr_cpu;
 
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 3dbfa82883be..85e33e45f818 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -734,7 +734,7 @@ void BPF_STRUCT_OPS(fcg_dispatch, s32 cpu, struct task_struct *prev)
 	struct fcg_cpu_ctx *cpuc;
 	struct fcg_cgrp_ctx *cgc;
 	struct cgroup *cgrp;
-	u64 now = bpf_ktime_get_ns();
+	u64 now = scx_bpf_now_ns();
 	bool picked_next = false;
 
 	cpuc = find_cpu_ctx();
-- 
2.47.1



* [PATCH v6 6/6] sched_ext: Use time helpers in BPF schedulers
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
                   ` (4 preceding siblings ...)
  2024-12-20  6:20 ` [PATCH v6 5/6] sched_ext: Replace bpf_ktime_get_ns() to scx_bpf_now_ns() Changwoo Min
@ 2024-12-20  6:20 ` Changwoo Min
  2024-12-20 22:29 ` [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Andrea Righi
  6 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-20  6:20 UTC (permalink / raw)
  To: tj, void, arighi, mingo, peterz; +Cc: changwoo, kernel-dev, linux-kernel

Modify the BPF schedulers to use the time helpers defined in common.bpf.h.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 tools/sched_ext/scx_central.bpf.c |  5 -----
 tools/sched_ext/scx_flatcg.bpf.c  | 11 +++--------
 tools/sched_ext/scx_simple.bpf.c  |  5 -----
 3 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/tools/sched_ext/scx_central.bpf.c b/tools/sched_ext/scx_central.bpf.c
index ea1d853b9dd4..c3b6998ea83e 100644
--- a/tools/sched_ext/scx_central.bpf.c
+++ b/tools/sched_ext/scx_central.bpf.c
@@ -87,11 +87,6 @@ struct {
 	__type(value, struct central_timer);
 } central_timer SEC(".maps");
 
-static bool vtime_before(u64 a, u64 b)
-{
-	return (s64)(a - b) < 0;
-}
-
 s32 BPF_STRUCT_OPS(central_select_cpu, struct task_struct *p,
 		   s32 prev_cpu, u64 wake_flags)
 {
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 85e33e45f818..2735ec25e511 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -137,11 +137,6 @@ static u64 div_round_up(u64 dividend, u64 divisor)
 	return (dividend + divisor - 1) / divisor;
 }
 
-static bool vtime_before(u64 a, u64 b)
-{
-	return (s64)(a - b) < 0;
-}
-
 static bool cgv_node_less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
 {
 	struct cgv_node *cgc_a, *cgc_b;
@@ -920,14 +915,14 @@ void BPF_STRUCT_OPS(fcg_cgroup_move, struct task_struct *p,
 		    struct cgroup *from, struct cgroup *to)
 {
 	struct fcg_cgrp_ctx *from_cgc, *to_cgc;
-	s64 vtime_delta;
+	s64 delta;
 
 	/* find_cgrp_ctx() triggers scx_ops_error() on lookup failures */
 	if (!(from_cgc = find_cgrp_ctx(from)) || !(to_cgc = find_cgrp_ctx(to)))
 		return;
 
-	vtime_delta = p->scx.dsq_vtime - from_cgc->tvtime_now;
-	p->scx.dsq_vtime = to_cgc->tvtime_now + vtime_delta;
+	delta = vtime_delta(p->scx.dsq_vtime, from_cgc->tvtime_now);
+	p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
 }
 
 s32 BPF_STRUCT_OPS_SLEEPABLE(fcg_init)
diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
index 31f915b286c6..6561d400ae6d 100644
--- a/tools/sched_ext/scx_simple.bpf.c
+++ b/tools/sched_ext/scx_simple.bpf.c
@@ -52,11 +52,6 @@ static void stat_inc(u32 idx)
 		(*cnt_p)++;
 }
 
-static inline bool vtime_before(u64 a, u64 b)
-{
-	return (s64)(a - b) < 0;
-}
-
 s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
 {
 	bool is_idle = false;
-- 
2.47.1



* Re: [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns()
  2024-12-20  6:20 ` [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
@ 2024-12-20 21:30   ` Andrea Righi
  2024-12-22  4:32     ` Changwoo Min
  2024-12-24 21:47   ` Tejun Heo
  1 sibling, 1 reply; 15+ messages in thread
From: Andrea Righi @ 2024-12-20 21:30 UTC (permalink / raw)
  To: Changwoo Min; +Cc: tj, void, mingo, peterz, changwoo, kernel-dev, linux-kernel

Hi Changwoo,

On Fri, Dec 20, 2024 at 03:20:21PM +0900, Changwoo Min wrote:
...
> +/**
> + * scx_bpf_now_ns - Returns a high-performance monotonically non-decreasing
> + * clock for the current CPU. The clock returned is in nanoseconds.
> + *
> + * It provides the following properties:
> + *
> + * 1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently
> + *  to account for execution time and track tasks' runtime properties.
> + *  Unfortunately, in some hardware platforms, bpf_ktime_get_ns() -- which
> + *  eventually reads a hardware timestamp counter -- is neither performant nor
> + *  scalable. scx_bpf_now_ns() aims to provide a high-performance clock by
> + *  using the rq clock in the scheduler core whenever possible.
> + *
> + * 2) High enough resolution for the BPF scheduler use cases: In most BPF
> + *  scheduler use cases, the required clock resolution is lower than the most
> + *  accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now_ns() basically
> + *  uses the rq clock in the scheduler core whenever it is valid. It considers
> + *  that the rq clock is valid from the time the rq clock is updated
> + *  (update_rq_clock) until the rq is unlocked (rq_unpin_lock).
> + *
> + * 3) Monotonically non-decreasing clock for the same CPU: scx_bpf_now_ns()
> + *  guarantees the clock never goes backward when comparing them in the same
> + *  CPU. On the other hand, when comparing clocks in different CPUs, there
> + *  is no such guarantee -- the clock can go backward. It provides a
> + *  monotonically *non-decreasing* clock so that it would provide the same
> + *  clock values in two different scx_bpf_now_ns() calls in the same CPU
> + *  during the same period of when the rq clock is valid.
> + */
> +__bpf_kfunc u64 scx_bpf_now_ns(void)
> +{
> +	struct rq *rq;
> +	u64 clock;
> +
> +	preempt_disable();
> +
> +	/*
> +	 * If the rq clock is valid, use the cached rq clock.
> +	 * Otherwise, return a fresh rq glock.

s/glock/clock/

> +	 *
> +	 * Note that scx_bpf_now_ns() is re-entrant between a process
> +	 * context and an interrupt context (e.g., timer interrupt).
> +	 * However, we don't need to consider the race between them
> +	 * because such race is not observable from a caller.
> +	 */
> +	rq = this_rq();
> +	clock = READ_ONCE(rq->scx.clock);
> +
> +	if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID)) {
> +		clock = sched_clock_cpu(cpu_of(rq));
> +
> +		/*
> +		 * The rq clock is updated outside of the rq lock.
> +		 * In this case, keep the updated rq clock invalid so the next
> +		 * kfunc call outside the rq lock gets a fresh rq clock.
> +		 */
> +		scx_rq_clock_update(rq, clock, false);
> +	}

I was wondering if we could use a special value for clock (like ~0ULL or
similar) to mark the clock as invalid.

This way, we could get rid of the extra READ_ONCE(rq->scx.flags) logic for
checking the clock validity. And if the actual clock happens to match the
special value, we'd simply re-read the TSC, which shouldn't be a big issue
in theory.

That said, I'm not sure if this would yield any real performance benefits,
so the current approach is probably fine as it is, therefore feel free to
ignore this.
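
For illustration only, the sentinel idea might look roughly like this
(SCX_RQ_CLK_INVALID is a hypothetical value, not something in this series):

	#define SCX_RQ_CLK_INVALID	(~0ULL)

	static inline void scx_rq_clock_invalidate(struct rq *rq)
	{
		if (!scx_enabled())
			return;
		WRITE_ONCE(rq->scx.clock, SCX_RQ_CLK_INVALID);
	}

	/* and in scx_bpf_now_ns(): */
	clock = READ_ONCE(rq->scx.clock);
	if (clock == SCX_RQ_CLK_INVALID)
		clock = sched_clock_cpu(cpu_of(rq));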

-Andrea


* Re: [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
  2024-12-20  6:20 [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Changwoo Min
                   ` (5 preceding siblings ...)
  2024-12-20  6:20 ` [PATCH v6 6/6] sched_ext: Use time helpers in BPF schedulers Changwoo Min
@ 2024-12-20 22:29 ` Andrea Righi
  2024-12-22  6:37   ` Changwoo Min
  6 siblings, 1 reply; 15+ messages in thread
From: Andrea Righi @ 2024-12-20 22:29 UTC (permalink / raw)
  To: Changwoo Min; +Cc: tj, void, mingo, peterz, changwoo, kernel-dev, linux-kernel

Hi Changwoo,

On Fri, Dec 20, 2024 at 03:20:19PM +0900, Changwoo Min wrote:
> Many BPF schedulers (such as scx_central, scx_lavd, scx_rusty, scx_bpfland,
> and scx_flash) frequently call bpf_ktime_get_ns() to track tasks' runtime
> properties. Where supported, bpf_ktime_get_ns() eventually reads a hardware
> timestamp counter (TSC). However, reading the hardware TSC is not
> performant on some hardware platforms, degrading IPC.
> 
> This patchset addresses the performance problem of reading the hardware TSC
> by leveraging the rq clock in the scheduler core and introducing a
> scx_bpf_now_ns() function for BPF schedulers. Whenever the rq clock
> is fresh and valid, scx_bpf_now_ns() returns the rq clock, which the
> scheduler core has already updated (update_rq_clock), so it avoids an
> extra read of the hardware TSC.
> 
> When the rq lock is released (rq_unpin_lock), the rq clock is invalidated,
> so a subsequent scx_bpf_now_ns() call returns a fresh sched_clock value to the caller.
> 
> In addition, scx_bpf_now_ns() guarantees the clock is monotonically
> non-decreasing for the same CPU, so the clock cannot go backward
> on that CPU.
> 
> Using scx_bpf_now_ns() reduces the number of hardware TSC reads
> by 51-82% (76% for scx_lavd, 82% for scx_bpfland, and 51% for scx_rusty)
> for the following benchmark:
> 
>     perf bench -f simple sched messaging -t -g 20 -l 6000

I've tested this patch set and I haven't observed any significant
performance improvements (but also no regressions), even though the systems
I've tested are likely quite efficient at reading the hardware TSC.

I'm curious if we'd see a more significant difference in non-hardware
virtualized systems (i.e., qemu without kvm). Have you done any testing in
such environments already?

In any case:

Tested-by: Andrea Righi <arighi@nvidia.com>

-Andrea


* Re: [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns()
  2024-12-20 21:30   ` Andrea Righi
@ 2024-12-22  4:32     ` Changwoo Min
  0 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-22  4:32 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, void, mingo, peterz, kernel-dev, linux-kernel

Hi Andrea,

On 24. 12. 21. 06:30, Andrea Righi wrote:
> Hi Changwoo,
> 
> On Fri, Dec 20, 2024 at 03:20:21PM +0900, Changwoo Min wrote:
> ...
>> +	/*
>> +	 * If the rq clock is valid, use the cached rq clock.
>> +	 * Otherwise, return a fresh rq glock.
> 
> s/glock/clock/
Oops. My bad.

>> +	if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID)) {
>> +		clock = sched_clock_cpu(cpu_of(rq));
>> +
>> +		/*
>> +		 * The rq clock is updated outside of the rq lock.
>> +		 * In this case, keep the updated rq clock invalid so the next
>> +		 * kfunc call outside the rq lock gets a fresh rq clock.
>> +		 */
>> +		scx_rq_clock_update(rq, clock, false);
>> +	}
> 
> I was wondering if we could use a special value for clock (like ~0ULL or
> similar) to mark the clock as invalid.
> 
> This way, we could get rid of the extra READ_ONCE(rq->scx.flags) logic for
> checking the clock validity. And if the actual clock happens to match the
> special value, we'd simply re-read the TSC, which shouldn't be a big issue
> in theory.


Thank you for the suggestion. In theory, the clock can overflow,
so it would be hard to reserve a specific value. I think it would
be better to keep the code as it is.

Regards,
Changwoo Min


* Re: [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock
  2024-12-20 22:29 ` [PATCH v6 0/6] sched_ext: Support high-performance monotonically non-decreasing clock Andrea Righi
@ 2024-12-22  6:37   ` Changwoo Min
  0 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-22  6:37 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, void, mingo, peterz, kernel-dev, linux-kernel

Hi Andrea,

On 24. 12. 21. 07:29, Andrea Righi wrote:
> I've tested this patch set and I haven't observed any significant
> performance improvements (but also no regressions), even if the systems
> I've tested are likely quite efficient at reading the hardware TSC.

Thank you for testing. I am glad to hear that there is no
performance regression on a TSC-efficient system.

> I'm curious if we'd see a more significant difference in non-hardware
> virtualized systems (i.e., qemu without kvm). Have you done any testing in
> such environments already?

Well, I guess the gain mostly depends on how inefficient the TSC
is on a given system and how much a workload stresses it.

Regards,
Changwoo Min


* Re: [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns()
  2024-12-20  6:20 ` [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns() Changwoo Min
  2024-12-20 21:30   ` Andrea Righi
@ 2024-12-24 21:47   ` Tejun Heo
  2024-12-27  0:38     ` Changwoo Min
  1 sibling, 1 reply; 15+ messages in thread
From: Tejun Heo @ 2024-12-24 21:47 UTC (permalink / raw)
  To: Changwoo Min
  Cc: void, arighi, mingo, peterz, changwoo, kernel-dev, linux-kernel

Hello,

On Fri, Dec 20, 2024 at 03:20:21PM +0900, Changwoo Min wrote:
...
> +__bpf_kfunc u64 scx_bpf_now_ns(void)

Given that the default time unit is ns for the scheduler, the _ns suffix
can probably go.

> +{
> +	struct rq *rq;
> +	u64 clock;
> +
> +	preempt_disable();
> +
> +	/*
> +	 * If the rq clock is valid, use the cached rq clock.
> +	 * Otherwise, return a fresh rq glock.
> +	 *
> +	 * Note that scx_bpf_now_ns() is re-entrant between a process
> +	 * context and an interrupt context (e.g., timer interrupt).
> +	 * However, we don't need to consider the race between them
> +	 * because such race is not observable from a caller.
> +	 */
> +	rq = this_rq();
> +	clock = READ_ONCE(rq->scx.clock);
> +
> +	if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID)) {
> +		clock = sched_clock_cpu(cpu_of(rq));
> +
> +		/*
> +		 * The rq clock is updated outside of the rq lock.
> +		 * In this case, keep the updated rq clock invalid so the next
> +		 * kfunc call outside the rq lock gets a fresh rq clock.
> +		 */
> +		scx_rq_clock_update(rq, clock, false);

Hmm... what does this update do?

...
> +static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid)
> +{
> +	if (!scx_enabled())
> +		return;
> +	WRITE_ONCE(rq->scx.clock, clock);
> +	if (valid)
> +		WRITE_ONCE(rq->scx.flags, rq->scx.flags | SCX_RQ_CLK_VALID);
> +}

Isn't rq->scx.clock used iff VALID is set? If so, why does !VALID read need
to update rq->scx.clock?

Thanks.

-- 
tejun


* Re: [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers
  2024-12-20  6:20 ` [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers Changwoo Min
@ 2024-12-24 21:49   ` Tejun Heo
  2024-12-27  0:32     ` Changwoo Min
  0 siblings, 1 reply; 15+ messages in thread
From: Tejun Heo @ 2024-12-24 21:49 UTC (permalink / raw)
  To: Changwoo Min
  Cc: void, arighi, mingo, peterz, changwoo, kernel-dev, linux-kernel

On Fri, Dec 20, 2024 at 03:20:23PM +0900, Changwoo Min wrote:
> The following functions are added for BPF schedulers:
> - vtime_delta(after, before)
> - vtime_after(a, b)
> - vtime_before(a, b)
> - vtime_after_eq(a, b)
> - vtime_before_eq(a, b)
> - vtime_in_range(a, b, c)
> - vtime_in_range_open(a, b, c)

Let's prefix them with time_ instead of vtime_ as they will be used for both
real and virtual times.

Thanks.

-- 
tejun


* Re: [PATCH v6 4/6] sched_ext: Add time helpers for BPF schedulers
  2024-12-24 21:49   ` Tejun Heo
@ 2024-12-27  0:32     ` Changwoo Min
  0 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-27  0:32 UTC (permalink / raw)
  To: Tejun Heo; +Cc: void, arighi, mingo, peterz, changwoo, kernel-dev, linux-kernel

Hello,

On 24. 12. 25. 06:49, Tejun Heo wrote:
> On Fri, Dec 20, 2024 at 03:20:23PM +0900, Changwoo Min wrote:
>> The following functions are added for BPF schedulers:
>> - vtime_delta(after, before)
>> - vtime_after(a, b)
>> - vtime_before(a, b)
>> - vtime_after_eq(a, b)
>> - vtime_before_eq(a, b)
>> - vtime_in_range(a, b, c)
>> - vtime_in_range_open(a, b, c)
> 
> Let's prefix them with time_ instead of vtime_ as they will be used for both
> real and virtual times.

Sure. I will change the prefix in the next version.

Regards,
Changwoo Min


* Re: [PATCH v6 2/6] sched_ext: Implement scx_bpf_now_ns()
  2024-12-24 21:47   ` Tejun Heo
@ 2024-12-27  0:38     ` Changwoo Min
  0 siblings, 0 replies; 15+ messages in thread
From: Changwoo Min @ 2024-12-27  0:38 UTC (permalink / raw)
  To: Tejun Heo, Changwoo Min
  Cc: void, arighi, mingo, peterz, kernel-dev, linux-kernel

Hello,

On 24. 12. 25. 06:47, Tejun Heo wrote:
> Hello,
> 
> On Fri, Dec 20, 2024 at 03:20:21PM +0900, Changwoo Min wrote:
> ...
>> +__bpf_kfunc u64 scx_bpf_now_ns(void)
> 
> Given that the default time unit is ns for the scheduler, the _ns suffix
> can probably go.

Ok. I will change it as suggested.

> 

>> +	if (!(READ_ONCE(rq->scx.flags) & SCX_RQ_CLK_VALID)) {
>> +		clock = sched_clock_cpu(cpu_of(rq));
>> +
>> +		/*
>> +		 * The rq clock is updated outside of the rq lock.
>> +		 * In this case, keep the updated rq clock invalid so the next
>> +		 * kfunc call outside the rq lock gets a fresh rq clock.
>> +		 */
>> +		scx_rq_clock_update(rq, clock, false);
> 
> Hmm... what does this update do?
It can be dropped as we do not track prev_clock.


> ...
>> +static inline void scx_rq_clock_update(struct rq *rq, u64 clock, bool valid)
>> +{
>> +	if (!scx_enabled())
>> +		return;
>> +	WRITE_ONCE(rq->scx.clock, clock);
>> +	if (valid)
>> +		WRITE_ONCE(rq->scx.flags, rq->scx.flags | SCX_RQ_CLK_VALID);
>> +}
> 
> Isn't rq->scx.clock used iff VALID is set? If so, why does !VALID read need
> to update rq->scx.clock?

If we drop the previous scx_rq_clock_update(.., false), we can
drop the if condition checking the valid flag.

Regards,
Changwoo Min
