* [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates
@ 2024-08-04 2:40 Tejun Heo
2024-08-04 2:40 ` [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick() Tejun Heo
` (7 more replies)
0 siblings, 8 replies; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo
Misc updates, mostly implementing Peter's feedback from the following
thread:
http://lkml.kernel.org/r/20240723163358.GM26750@noisy.programming.kicks-ass.net
This patchset contains the following patches:
0001-sched_ext-Simplify-scx_can_stop_tick-invocation-in-s.patch
0002-sched_ext-Add-scx_enabled-test-to-start_class-promot.patch
0003-sched_ext-Use-update_curr_common-in-update_curr_scx.patch
0004-sched_ext-Simplify-UP-support-by-enabling-sched_clas.patch
0005-sched_ext-Improve-comment-on-idle_sched_class-except.patch
0006-sched_ext-Make-task_can_run_on_remote_rq-use-common-.patch
and is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git scx-misc-updates
kernel/sched/core.c | 14 ++++------
kernel/sched/ext.c | 101 +++++++++++++++++++++++++++++++------------------------------------------
kernel/sched/sched.h | 20 +++++++++++++-
3 files changed, 69 insertions(+), 66 deletions(-)
--
tejun
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick()
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 17:55 ` David Vernet
2024-08-04 2:40 ` [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance() Tejun Heo
` (6 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
The way sched_can_stop_tick() used scx_can_stop_tick() was rather confusing
and the behavior wasn't ideal when SCX is enabled in partial mode. Simplify
it so that:
- scx_can_stop_tick() can say no if scx_enabled().
- CFS tests rq->cfs.nr_running > 1 instead of rq->nr_running.
This is easier to follow and leads to the correct answer whether SCX is
disabled, enabled in partial mode or all tasks are switched to SCX.
Peter, note that this is a bit different from your suggestion where
sched_can_stop_tick() unconditionally returns scx_can_stop_tick() iff
scx_switched_all(). The problem is that in partial mode, tick can be stopped
when there is only one SCX task even if the BPF scheduler didn't ask and
isn't ready for it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 22f86d5e9231..7994118eee53 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1267,10 +1267,10 @@ bool sched_can_stop_tick(struct rq *rq)
* left. For CFS, if there's more than one we need the tick for
* involuntary preemption. For SCX, ask.
*/
- if (!scx_switched_all() && rq->nr_running > 1)
+ if (scx_enabled() && !scx_can_stop_tick(rq))
return false;
- if (scx_enabled() && !scx_can_stop_tick(rq))
+ if (rq->cfs.nr_running > 1)
return false;
/*
--
2.46.0
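The reordered checks can be sketched as a standalone predicate. This is a minimal model for illustration only, not the kernel code: the boolean and count parameters stand in for scx_enabled(), scx_can_stop_tick(rq) and rq->cfs.nr_running, which the real function reads from the runqueue.

```c
#include <stdbool.h>

/*
 * Minimal model of the reordered sched_can_stop_tick() checks from this
 * patch. The inputs are plain values standing in for scx_enabled(),
 * scx_can_stop_tick(rq) and rq->cfs.nr_running.
 */
static bool can_stop_tick(bool scx_enabled, bool scx_says_yes,
			  unsigned int cfs_nr_running)
{
	/*
	 * If SCX is enabled at all - even in partial mode - the BPF
	 * scheduler must agree before the tick may be stopped.
	 */
	if (scx_enabled && !scx_says_yes)
		return false;

	/*
	 * CFS needs the tick for involuntary preemption whenever more
	 * than one CFS task is runnable, so test cfs.nr_running rather
	 * than the rq-wide nr_running.
	 */
	if (cfs_nr_running > 1)
		return false;

	return true;
}
```

This makes the partial-mode behavior explicit: a lone SCX task no longer lets the tick stop unless the BPF scheduler says so.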
* [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance()
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
2024-08-04 2:40 ` [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick() Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 17:57 ` David Vernet
2024-08-04 2:40 ` [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx() Tejun Heo
` (5 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
SCX needs its balance() invoked even when waking up from a lower priority
sched class (idle) and put_prev_task_balance() thus has the logic to promote
@start_class if it's lower than ext_sched_class. This is only needed when
SCX is enabled. Add scx_enabled() test to avoid unnecessary overhead when
SCX is disabled.
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7994118eee53..0532b27fd9af 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5836,7 +5836,7 @@ static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
* when waking up from SCHED_IDLE. If @start_class is below SCX, start
* from SCX instead.
*/
- if (sched_class_above(&ext_sched_class, start_class))
+ if (scx_enabled() && sched_class_above(&ext_sched_class, start_class))
start_class = &ext_sched_class;
#endif
--
2.46.0
* [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx()
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
2024-08-04 2:40 ` [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick() Tejun Heo
2024-08-04 2:40 ` [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance() Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 18:23 ` David Vernet
2024-08-04 2:40 ` [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP Tejun Heo
` (4 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
update_curr_scx() is open coding runtime updates. Use update_curr_common()
instead and avoid unnecessary deviations.
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/ext.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 938830121a32..48f8f57f5954 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1466,20 +1466,14 @@ static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
static void update_curr_scx(struct rq *rq)
{
struct task_struct *curr = rq->curr;
- u64 now = rq_clock_task(rq);
- u64 delta_exec;
+ s64 delta_exec;
- if (time_before_eq64(now, curr->se.exec_start))
+ delta_exec = update_curr_common(rq);
+ if (unlikely(delta_exec <= 0))
return;
- delta_exec = now - curr->se.exec_start;
- curr->se.exec_start = now;
- curr->se.sum_exec_runtime += delta_exec;
- account_group_exec_runtime(curr, delta_exec);
- cgroup_account_cputime(curr, delta_exec);
-
if (curr->scx.slice != SCX_SLICE_INF) {
- curr->scx.slice -= min(curr->scx.slice, delta_exec);
+ curr->scx.slice -= min_t(u64, curr->scx.slice, delta_exec);
if (!curr->scx.slice)
touch_core_sched(rq, curr);
}
--
2.46.0
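The slice bookkeeping after the conversion can be modeled in isolation. The helper below is a hypothetical stand-in, not the kernel function: it mimics how delta_exec comes back signed from update_curr_common() and how the min_t() clamp keeps the slice from wrapping below zero.

```c
#include <stdint.h>

/*
 * Model of the clamped slice decrement in update_curr_scx() after this
 * patch: delta_exec is signed (update_curr_common() can return <= 0 when
 * nothing ran), and the subtraction is clamped so the slice saturates at
 * zero instead of underflowing.
 */
static uint64_t consume_slice(uint64_t slice, int64_t delta_exec)
{
	if (delta_exec <= 0)	/* nothing ran; bail out early */
		return slice;

	/* min_t(u64, slice, delta_exec) in the kernel */
	uint64_t dec = (uint64_t)delta_exec < slice ?
			(uint64_t)delta_exec : slice;
	return slice - dec;
}
```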
* [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
` (2 preceding siblings ...)
2024-08-04 2:40 ` [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx() Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 19:49 ` David Vernet
2024-08-04 2:40 ` [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked() Tejun Heo
` (3 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
On SMP, SCX performs dispatch from sched_class->balance(). As balance() was
not available in UP, it instead called the internal balance function from
put_prev_task_scx() and pick_next_task_scx() to emulate the effect, which is
rather nasty.
Enabling sched_class->balance() on UP shouldn't cause any meaningful
overhead. Enable balance() on UP and drop the ugly workaround.
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/core.c | 4 +---
kernel/sched/ext.c | 41 +----------------------------------------
kernel/sched/sched.h | 2 +-
3 files changed, 3 insertions(+), 44 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0532b27fd9af..d2ccc2c4b4d3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5826,7 +5826,6 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
struct rq_flags *rf)
{
-#ifdef CONFIG_SMP
const struct sched_class *start_class = prev->sched_class;
const struct sched_class *class;
@@ -5849,10 +5848,9 @@ static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
* a runnable task of @class priority or higher.
*/
for_active_class_range(class, start_class, &idle_sched_class) {
- if (class->balance(rq, prev, rf))
+ if (class->balance && class->balance(rq, prev, rf))
break;
}
-#endif
put_prev_task(rq, prev);
}
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 48f8f57f5954..09f394bb4889 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2616,7 +2616,6 @@ static int balance_one(struct rq *rq, struct task_struct *prev, bool local)
return has_tasks;
}
-#ifdef CONFIG_SMP
static int balance_scx(struct rq *rq, struct task_struct *prev,
struct rq_flags *rf)
{
@@ -2650,7 +2649,6 @@ static int balance_scx(struct rq *rq, struct task_struct *prev,
return ret;
}
-#endif
static void set_next_task_scx(struct rq *rq, struct task_struct *p, bool first)
{
@@ -2719,37 +2717,6 @@ static void process_ddsp_deferred_locals(struct rq *rq)
static void put_prev_task_scx(struct rq *rq, struct task_struct *p)
{
-#ifndef CONFIG_SMP
- /*
- * UP workaround.
- *
- * Because SCX may transfer tasks across CPUs during dispatch, dispatch
- * is performed from its balance operation which isn't called in UP.
- * Let's work around by calling it from the operations which come right
- * after.
- *
- * 1. If the prev task is on SCX, pick_next_task() calls
- * .put_prev_task() right after. As .put_prev_task() is also called
- * from other places, we need to distinguish the calls which can be
- * done by looking at the previous task's state - if still queued or
- * dequeued with %SCX_DEQ_SLEEP, the caller must be pick_next_task().
- * This case is handled here.
- *
- * 2. If the prev task is not on SCX, the first following call into SCX
- * will be .pick_next_task(), which is covered by calling
- * balance_scx() from pick_next_task_scx().
- *
- * Note that we can't merge the first case into the second as
- * balance_scx() must be called before the previous SCX task goes
- * through put_prev_task_scx().
- *
- * @rq is pinned and can't be unlocked. As UP doesn't transfer tasks
- * around, balance_one() doesn't need to.
- */
- if (p->scx.flags & (SCX_TASK_QUEUED | SCX_TASK_DEQD_FOR_SLEEP))
- balance_one(rq, p, true);
-#endif
-
update_curr_scx(rq);
/* see dequeue_task_scx() on why we skip when !QUEUED */
@@ -2807,12 +2774,6 @@ static struct task_struct *pick_next_task_scx(struct rq *rq)
{
struct task_struct *p;
-#ifndef CONFIG_SMP
- /* UP workaround - see the comment at the head of put_prev_task_scx() */
- if (unlikely(rq->curr->sched_class != &ext_sched_class))
- balance_one(rq, rq->curr, true);
-#endif
-
p = first_local_task(rq);
if (!p)
return NULL;
@@ -3673,6 +3634,7 @@ DEFINE_SCHED_CLASS(ext) = {
.wakeup_preempt = wakeup_preempt_scx,
+ .balance = balance_scx,
.pick_next_task = pick_next_task_scx,
.put_prev_task = put_prev_task_scx,
@@ -3681,7 +3643,6 @@ DEFINE_SCHED_CLASS(ext) = {
.switch_class = switch_class_scx,
#ifdef CONFIG_SMP
- .balance = balance_scx,
.select_task_rq = select_task_rq_scx,
.task_woken = task_woken_scx,
.set_cpus_allowed = set_cpus_allowed_scx,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 42b4d1428c2c..9b88a46d3fce 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2357,6 +2357,7 @@ struct sched_class {
void (*wakeup_preempt)(struct rq *rq, struct task_struct *p, int flags);
+ int (*balance)(struct rq *rq, struct task_struct *prev, struct rq_flags *rf);
struct task_struct *(*pick_next_task)(struct rq *rq);
void (*put_prev_task)(struct rq *rq, struct task_struct *p);
@@ -2365,7 +2366,6 @@ struct sched_class {
void (*switch_class)(struct rq *rq, struct task_struct *next);
#ifdef CONFIG_SMP
- int (*balance)(struct rq *rq, struct task_struct *prev, struct rq_flags *rf);
int (*select_task_rq)(struct task_struct *p, int task_cpu, int flags);
struct task_struct * (*pick_task)(struct rq *rq);
--
2.46.0
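With balance() now present in all builds, the call site needs only a NULL test instead of a CONFIG_SMP #ifdef. A toy model of that walk, using a simplified stand-in for the sched_class table (names and types are illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a sched_class carrying only the optional balance() hook. */
struct toy_class {
	const char *name;
	bool (*balance)(void);	/* may be NULL, as for classes without one */
};

static bool balance_fails(void) { return false; }	/* found nothing */
static bool balance_succeeds(void) { return true; }	/* found a task */

/*
 * Walk classes from highest to lowest priority, invoking balance() where
 * present. A true return means a runnable task of that class (or higher)
 * was found, which terminates the walk as in put_prev_task_balance().
 * Returns the index of the class that stopped the walk, or n if none did.
 */
static size_t run_balance_chain(const struct toy_class *classes, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		/*
		 * The NULL check replaces the old #ifdef: classes that
		 * don't implement balance() are simply skipped.
		 */
		if (classes[i].balance && classes[i].balance())
			return i;
	}
	return n;
}
```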
* [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked()
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
` (3 preceding siblings ...)
2024-08-04 2:40 ` [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 19:50 ` David Vernet
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
` (2 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
scx_task_iter_next_locked() skips tasks whose sched_class is
idle_sched_class. While it has a short comment explaining why it tests
the sched_class directly instead of using is_idle_task(), the comment
doesn't sufficiently explain what's going on and why. Improve the comment.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/ext.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 09f394bb4889..7837a551022c 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1252,8 +1252,29 @@ scx_task_iter_next_locked(struct scx_task_iter *iter, bool include_dead)
while ((p = scx_task_iter_next(iter))) {
/*
- * is_idle_task() tests %PF_IDLE which may not be set for CPUs
- * which haven't yet been onlined. Test sched_class directly.
+ * scx_task_iter is used to prepare and move tasks into SCX
+ * while loading the BPF scheduler and vice-versa while
+ * unloading. The init_tasks ("swappers") should be excluded
+ * from the iteration because:
+ *
+ * - It's unsafe to use __setscheduler_prio() on an init_task to
+ * determine the sched_class to use as it won't preserve its
+ * idle_sched_class.
+ *
+ * - ops.init/exit_task() can easily be confused if called with
+ * init_tasks as they, e.g., share PID 0.
+ *
+ * As init_tasks are never scheduled through SCX, they can be
+ * skipped safely. Note that is_idle_task() which tests %PF_IDLE
+ * doesn't work here:
+ *
+ * - %PF_IDLE may not be set for an init_task whose CPU hasn't
+ * yet been onlined.
+ *
+ * - %PF_IDLE can be set on tasks that are not init_tasks. See
+ * play_idle_precise() used by CONFIG_IDLE_INJECT.
+ *
+ * Test for idle_sched_class as only init_tasks are on it.
*/
if (p->sched_class != &idle_sched_class)
break;
--
2.46.0
* [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
` (4 preceding siblings ...)
2024-08-04 2:40 ` [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked() Tejun Heo
@ 2024-08-04 2:40 ` Tejun Heo
2024-08-05 19:55 ` David Vernet
` (2 more replies)
2024-08-06 8:13 ` [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Peter Zijlstra
2024-08-06 19:39 ` Tejun Heo
7 siblings, 3 replies; 18+ messages in thread
From: Tejun Heo @ 2024-08-04 2:40 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo, Tejun Heo
task_can_run_on_remote_rq() is similar to is_cpu_allowed() but there are
subtle differences. It currently open codes all the tests. This is
cumbersome to understand and error-prone in case the intersecting tests need
to be updated.
Factor out the common part - testing whether the task is allowed on the CPU
at all regardless of the CPU state - into task_allowed_on_cpu() and make
both is_cpu_allowed() and SCX's task_can_run_on_remote_rq() use it. As the
code is now linked between the two and each contains only the extra tests
that differ between them, it's less error-prone when the conditions need to
be updated. Also, improve the comment to explain why they are different.
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/core.c | 4 ++--
kernel/sched/ext.c | 21 ++++++++++++++++-----
kernel/sched/sched.h | 18 ++++++++++++++++++
3 files changed, 36 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2ccc2c4b4d3..3c22d0c8eed1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2311,7 +2311,7 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
{
/* When not in the task's cpumask, no point in looking further. */
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (!task_allowed_on_cpu(p, cpu))
return false;
/* migrate_disabled() must be allowed to finish. */
@@ -2320,7 +2320,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
/* Non kernel threads are not allowed during either online or offline. */
if (!(p->flags & PF_KTHREAD))
- return cpu_active(cpu) && task_cpu_possible(cpu, p);
+ return cpu_active(cpu);
/* KTHREAD_IS_PER_CPU is always allowed. */
if (kthread_is_per_cpu(p))
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 7837a551022c..60a7eb7d8a9e 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2224,19 +2224,30 @@ static void consume_local_task(struct rq *rq, struct scx_dispatch_q *dsq,
#ifdef CONFIG_SMP
/*
- * Similar to kernel/sched/core.c::is_cpu_allowed() but we're testing whether @p
- * can be pulled to @rq.
+ * Similar to kernel/sched/core.c::is_cpu_allowed(). However, there are two
+ * differences:
+ *
+ * - is_cpu_allowed() asks "Can this task run on this CPU?" while
+ * task_can_run_on_remote_rq() asks "Can the BPF scheduler migrate the task to
+ * this CPU?".
+ *
+ * While migration is disabled, is_cpu_allowed() has to say "yes" as the task
+ * must be allowed to finish on the CPU that it's currently on regardless of
+ * the CPU state. However, task_can_run_on_remote_rq() must say "no" as the
+ * BPF scheduler shouldn't attempt to migrate a task which has migration
+ * disabled.
+ *
+ * - The BPF scheduler is bypassed while the rq is offline and we can always say
+ * no to the BPF scheduler initiated migrations while offline.
*/
static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq)
{
int cpu = cpu_of(rq);
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (!task_allowed_on_cpu(p, cpu))
return false;
if (unlikely(is_migration_disabled(p)))
return false;
- if (!(p->flags & PF_KTHREAD) && unlikely(!task_cpu_possible(cpu, p)))
- return false;
if (!scx_rq_online(rq))
return false;
return true;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9b88a46d3fce..2b369d8a36b1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2530,6 +2530,19 @@ extern void sched_balance_trigger(struct rq *rq);
extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
+extern inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
+{
+ /* When not in the task's cpumask, no point in looking further. */
+ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ return false;
+
+ /* Can @cpu run a user thread? */
+ if (!(p->flags & PF_KTHREAD) && !task_cpu_possible(cpu, p))
+ return false;
+
+ return true;
+}
+
static inline cpumask_t *alloc_user_cpus_ptr(int node)
{
/*
@@ -2563,6 +2576,11 @@ extern int push_cpu_stop(void *arg);
#else /* !CONFIG_SMP: */
+static inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
+{
+ return true;
+}
+
static inline int __set_cpus_allowed_ptr(struct task_struct *p,
struct affinity_context *ctx)
{
--
2.46.0
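The factoring can be modeled as one shared predicate plus two callers that layer their own tests on top. Everything below is a simplified sketch: the bitmask and flags stand in for p->cpus_ptr, is_migration_disabled(), cpu_active() and scx_rq_online(), and the PF_KTHREAD/task_cpu_possible() leg of the common check is omitted for brevity.

```c
#include <stdbool.h>

/* Simplified stand-in for the task state the real checks consult. */
struct toy_task {
	unsigned long cpus_mask;	/* allowed-CPU bitmask (cpus_ptr) */
	bool migration_disabled;
};

/*
 * Common part (task_allowed_on_cpu()): is the task allowed on this CPU
 * at all, regardless of the CPU's current state?
 */
static bool task_allowed_on_cpu(const struct toy_task *p, int cpu)
{
	return (p->cpus_mask & (1UL << cpu)) != 0;
}

/*
 * is_cpu_allowed(): a migration-disabled task must be allowed to finish
 * on the CPU it's currently on, so the check passes for it.
 */
static bool is_cpu_allowed(const struct toy_task *p, int cpu, bool cpu_active)
{
	if (!task_allowed_on_cpu(p, cpu))
		return false;
	if (p->migration_disabled)
		return true;
	return cpu_active;
}

/*
 * task_can_run_on_remote_rq(): a BPF-scheduler-initiated migration must
 * instead say "no" for migration-disabled tasks and offline rqs.
 */
static bool task_can_run_on_remote_rq(const struct toy_task *p, int cpu,
				      bool rq_online)
{
	if (!task_allowed_on_cpu(p, cpu))
		return false;
	if (p->migration_disabled)
		return false;
	return rq_online;
}
```

The shared predicate is the point of the patch: when the affinity conditions change, both callers pick up the change, and only the genuinely divergent tests remain open coded.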
* Re: [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick()
2024-08-04 2:40 ` [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick() Tejun Heo
@ 2024-08-05 17:55 ` David Vernet
0 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 17:55 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:08PM -1000, Tejun Heo wrote:
> The way sched_can_stop_tick() used scx_can_stop_tick() was rather confusing
> and the behavior wasn't ideal when SCX is enabled in partial mode. Simplify
> it so that:
>
> - scx_can_stop_tick() can say no if scx_enabled().
>
> - CFS tests rq->cfs.nr_running > 1 instead of rq->nr_running.
>
> This is easier to follow and leads to the correct answer whether SCX is
> disabled, enabled in partial mode or all tasks are switched to SCX.
>
> Peter, note that this is a bit different from your suggestion where
> sched_can_stop_tick() unconditionally returns scx_can_stop_tick() iff
> scx_switched_all(). The problem is that in partial mode, tick can be stopped
> when there is only one SCX task even if the BPF scheduler didn't ask and
> isn't ready for it.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance()
2024-08-04 2:40 ` [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance() Tejun Heo
@ 2024-08-05 17:57 ` David Vernet
0 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 17:57 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:09PM -1000, Tejun Heo wrote:
> SCX needs its balance() invoked even when waking up from a lower priority
> sched class (idle) and put_prev_task_balance() thus has the logic to promote
> @start_class if it's lower than ext_sched_class. This is only needed when
> SCX is enabled. Add scx_enabled() test to avoid unnecessary overhead when
> SCX is disabled.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx()
2024-08-04 2:40 ` [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx() Tejun Heo
@ 2024-08-05 18:23 ` David Vernet
0 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 18:23 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:10PM -1000, Tejun Heo wrote:
> update_curr_scx() is open coding runtime updates. Use update_curr_common()
> instead and avoid unnecessary deviations.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP
2024-08-04 2:40 ` [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP Tejun Heo
@ 2024-08-05 19:49 ` David Vernet
0 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 19:49 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:11PM -1000, Tejun Heo wrote:
> On SMP, SCX performs dispatch from sched_class->balance(). As balance() was
> not available in UP, it instead called the internal balance function from
> put_prev_task_scx() and pick_next_task_scx() to emulate the effect, which is
> rather nasty.
>
> Enabling sched_class->balance() on UP shouldn't cause any meaningful
> overhead. Enable balance() on UP and drop the ugly workaround.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked()
2024-08-04 2:40 ` [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked() Tejun Heo
@ 2024-08-05 19:50 ` David Vernet
0 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 19:50 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:12PM -1000, Tejun Heo wrote:
> scx_task_iter_next_locked() skips tasks whose sched_class is
> idle_sched_class. While it has a short comment explaining why it's testing
> the sched_class directly instead of using is_idle_task(), the comment
> doesn't sufficiently explain what's going on and why. Improve the comment.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
@ 2024-08-05 19:55 ` David Vernet
2024-08-06 8:12 ` Peter Zijlstra
2024-08-06 19:39 ` [PATCH v2 " Tejun Heo
2 siblings, 0 replies; 18+ messages in thread
From: David Vernet @ 2024-08-05 19:55 UTC (permalink / raw)
To: Tejun Heo; +Cc: peterz, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:13PM -1000, Tejun Heo wrote:
> task_can_run_on_remote_rq() is similar to is_cpu_allowed() but there are
> subtle differences. It currently open codes all the tests. This is
> cumbersome to understand and error-prone in case the intersecting tests need
> to be updated.
>
> Factor out the common part - testing whether the task is allowed on the CPU
> at all regardless of the CPU state - into task_allowed_on_cpu() and make
> both is_cpu_allowed() and SCX's task_can_run_on_remote_rq() use it. As the
> code is now linked between the two and each contains only the extra tests
> that differ between them, it's less error-prone when the conditions need to
> be updated. Also, improve the comment to explain why they are different.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
* Re: [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
2024-08-05 19:55 ` David Vernet
@ 2024-08-06 8:12 ` Peter Zijlstra
2024-08-06 17:04 ` Tejun Heo
2024-08-06 19:39 ` [PATCH v2 " Tejun Heo
2 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2024-08-06 8:12 UTC (permalink / raw)
To: Tejun Heo; +Cc: void, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:13PM -1000, Tejun Heo wrote:
> task_can_run_on_remote_rq() is similar to is_cpu_allowed() but there are
> subtle differences. It currently open codes all the tests. This is
> cumbersome to understand and error-prone in case the intersecting tests need
> to be updated.
>
> Factor out the common part - testing whether the task is allowed on the CPU
> at all regardless of the CPU state - into task_allowed_on_cpu() and make
> both is_cpu_allowed() and SCX's task_can_run_on_remote_rq() use it. As the
> code is now linked between the two and each contains only the extra tests
> that differ between them, it's less error-prone when the conditions need to
> be updated. Also, improve the comment to explain why they are different.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> ---
> kernel/sched/core.c | 4 ++--
> kernel/sched/ext.c | 21 ++++++++++++++++-----
> kernel/sched/sched.h | 18 ++++++++++++++++++
> 3 files changed, 36 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d2ccc2c4b4d3..3c22d0c8eed1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2311,7 +2311,7 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
> static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
> {
> /* When not in the task's cpumask, no point in looking further. */
> - if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> + if (!task_allowed_on_cpu(p, cpu))
> return false;
>
> /* migrate_disabled() must be allowed to finish. */
> @@ -2320,7 +2320,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
>
> /* Non kernel threads are not allowed during either online or offline. */
> if (!(p->flags & PF_KTHREAD))
> - return cpu_active(cpu) && task_cpu_possible(cpu, p);
> + return cpu_active(cpu);
>
> /* KTHREAD_IS_PER_CPU is always allowed. */
> if (kthread_is_per_cpu(p))
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 7837a551022c..60a7eb7d8a9e 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -2224,19 +2224,30 @@ static void consume_local_task(struct rq *rq, struct scx_dispatch_q *dsq,
>
> #ifdef CONFIG_SMP
> /*
> - * Similar to kernel/sched/core.c::is_cpu_allowed() but we're testing whether @p
> - * can be pulled to @rq.
> + * Similar to kernel/sched/core.c::is_cpu_allowed(). However, there are two
> + * differences:
> + *
> + * - is_cpu_allowed() asks "Can this task run on this CPU?" while
> + * task_can_run_on_remote_rq() asks "Can the BPF scheduler migrate the task to
> + * this CPU?".
> + *
> + * While migration is disabled, is_cpu_allowed() has to say "yes" as the task
> + * must be allowed to finish on the CPU that it's currently on regardless of
> + * the CPU state. However, task_can_run_on_remote_rq() must say "no" as the
> + * BPF scheduler shouldn't attempt to migrate a task which has migration
> + * disabled.
> + *
> + * - The BPF scheduler is bypassed while the rq is offline and we can always say
> + * no to the BPF scheduler initiated migrations while offline.
> */
> static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq)
> {
> int cpu = cpu_of(rq);
>
> - if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> + if (!task_allowed_on_cpu(p, cpu))
> return false;
> if (unlikely(is_migration_disabled(p)))
> return false;
> - if (!(p->flags & PF_KTHREAD) && unlikely(!task_cpu_possible(cpu, p)))
> - return false;
> if (!scx_rq_online(rq))
> return false;
> return true;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 9b88a46d3fce..2b369d8a36b1 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2530,6 +2530,19 @@ extern void sched_balance_trigger(struct rq *rq);
> extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
> extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
>
> +extern inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
This wants to be "static inline", no? I think we try to avoid "extern
inline".
> +{
> + /* When not in the task's cpumask, no point in looking further. */
> + if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> + return false;
> +
> + /* Can @cpu run a user thread? */
> + if (!(p->flags & PF_KTHREAD) && !task_cpu_possible(cpu, p))
> + return false;
> +
> + return true;
> +}
> +
> static inline cpumask_t *alloc_user_cpus_ptr(int node)
> {
> /*
> @@ -2563,6 +2576,11 @@ extern int push_cpu_stop(void *arg);
>
> #else /* !CONFIG_SMP: */
>
> +static inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
> +{
> + return true;
> +}
> +
> static inline int __set_cpus_allowed_ptr(struct task_struct *p,
> struct affinity_context *ctx)
> {
> --
> 2.46.0
>
* Re: [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
` (5 preceding siblings ...)
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
@ 2024-08-06 8:13 ` Peter Zijlstra
2024-08-06 19:39 ` Tejun Heo
7 siblings, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2024-08-06 8:13 UTC (permalink / raw)
To: Tejun Heo; +Cc: void, linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:07PM -1000, Tejun Heo wrote:
> Misc updates mostly implementing Peter's feedback from the following
> thread:
>
> http://lkml.kernel.org/r/20240723163358.GM26750@noisy.programming.kicks-ass.net
>
> This patchset contains the following patches:
>
> 0001-sched_ext-Simplify-scx_can_stop_tick-invocation-in-s.patch
> 0002-sched_ext-Add-scx_enabled-test-to-start_class-promot.patch
> 0003-sched_ext-Use-update_curr_common-in-update_curr_scx.patch
> 0004-sched_ext-Simplify-UP-support-by-enabling-sched_clas.patch
> 0005-sched_ext-Improve-comment-on-idle_sched_class-except.patch
> 0006-sched_ext-Make-task_can_run_on_remote_rq-use-common-.patch
Aside from that one nit in the last patch, these look good. Thanks!
* Re: [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
2024-08-06 8:12 ` Peter Zijlstra
@ 2024-08-06 17:04 ` Tejun Heo
0 siblings, 0 replies; 18+ messages in thread
From: Tejun Heo @ 2024-08-06 17:04 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: void, linux-kernel, kernel-team, mingo
On Tue, Aug 06, 2024 at 10:12:59AM +0200, Peter Zijlstra wrote:
> > +extern inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
>
> This wants to be "static inline". no? I think we try and avoid "extern
> inline".
Oh yeah, definitely. Will fix it up.
Thanks.
--
tejun
* [PATCH v2 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu()
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
2024-08-05 19:55 ` David Vernet
2024-08-06 8:12 ` Peter Zijlstra
@ 2024-08-06 19:39 ` Tejun Heo
2 siblings, 0 replies; 18+ messages in thread
From: Tejun Heo @ 2024-08-06 19:39 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo
task_can_run_on_remote_rq() is similar to is_cpu_allowed() but there are
subtle differences. It currently open-codes all the tests. This is
cumbersome to understand and error-prone in case the intersecting tests need
to be updated.
Factor out the common part - testing whether the task is allowed on the CPU
at all regardless of the CPU state - into task_allowed_on_cpu() and make
both is_cpu_allowed() and SCX's task_can_run_on_remote_rq() use it. As the
code is now shared between the two and each caller contains only the extra
tests that differ between them, it's less error-prone when the conditions
need to be updated. Also, improve the comment to explain why they are
different.
v2: Replace accidental "extern inline" with "static inline" (Peter).
Signed-off-by: Tejun Heo <tj@kernel.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: David Vernet <void@manifault.com>
---
kernel/sched/core.c | 4 ++--
kernel/sched/ext.c | 21 ++++++++++++++++-----
kernel/sched/sched.h | 18 ++++++++++++++++++
3 files changed, 36 insertions(+), 7 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2311,7 +2311,7 @@ static inline bool rq_has_pinned_tasks(s
static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
{
/* When not in the task's cpumask, no point in looking further. */
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (!task_allowed_on_cpu(p, cpu))
return false;
/* migrate_disabled() must be allowed to finish. */
@@ -2320,7 +2320,7 @@ static inline bool is_cpu_allowed(struct
/* Non kernel threads are not allowed during either online or offline. */
if (!(p->flags & PF_KTHREAD))
- return cpu_active(cpu) && task_cpu_possible(cpu, p);
+ return cpu_active(cpu);
/* KTHREAD_IS_PER_CPU is always allowed. */
if (kthread_is_per_cpu(p))
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2224,19 +2224,30 @@ static void consume_local_task(struct rq
#ifdef CONFIG_SMP
/*
- * Similar to kernel/sched/core.c::is_cpu_allowed() but we're testing whether @p
- * can be pulled to @rq.
+ * Similar to kernel/sched/core.c::is_cpu_allowed(). However, there are two
+ * differences:
+ *
+ * - is_cpu_allowed() asks "Can this task run on this CPU?" while
+ * task_can_run_on_remote_rq() asks "Can the BPF scheduler migrate the task to
+ * this CPU?".
+ *
+ * While migration is disabled, is_cpu_allowed() has to say "yes" as the task
+ * must be allowed to finish on the CPU that it's currently on regardless of
+ * the CPU state. However, task_can_run_on_remote_rq() must say "no" as the
+ * BPF scheduler shouldn't attempt to migrate a task which has migration
+ * disabled.
+ *
+ * - The BPF scheduler is bypassed while the rq is offline and we can always say
+ * no to the BPF scheduler initiated migrations while offline.
*/
static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq)
{
int cpu = cpu_of(rq);
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (!task_allowed_on_cpu(p, cpu))
return false;
if (unlikely(is_migration_disabled(p)))
return false;
- if (!(p->flags & PF_KTHREAD) && unlikely(!task_cpu_possible(cpu, p)))
- return false;
if (!scx_rq_online(rq))
return false;
return true;
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2530,6 +2530,19 @@ extern void sched_balance_trigger(struct
extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);
+static inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
+{
+ /* When not in the task's cpumask, no point in looking further. */
+ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ return false;
+
+ /* Can @cpu run a user thread? */
+ if (!(p->flags & PF_KTHREAD) && !task_cpu_possible(cpu, p))
+ return false;
+
+ return true;
+}
+
static inline cpumask_t *alloc_user_cpus_ptr(int node)
{
/*
@@ -2563,6 +2576,11 @@ extern int push_cpu_stop(void *arg);
#else /* !CONFIG_SMP: */
+static inline bool task_allowed_on_cpu(struct task_struct *p, int cpu)
+{
+ return true;
+}
+
static inline int __set_cpus_allowed_ptr(struct task_struct *p,
struct affinity_context *ctx)
{
* Re: [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
` (6 preceding siblings ...)
2024-08-06 8:13 ` [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Peter Zijlstra
@ 2024-08-06 19:39 ` Tejun Heo
7 siblings, 0 replies; 18+ messages in thread
From: Tejun Heo @ 2024-08-06 19:39 UTC (permalink / raw)
To: void, peterz; +Cc: linux-kernel, kernel-team, mingo
On Sat, Aug 03, 2024 at 04:40:07PM -1000, Tejun Heo wrote:
> Misc updates mostly implementing Peter's feedback from the following
> thread:
>
> http://lkml.kernel.org/r/20240723163358.GM26750@noisy.programming.kicks-ass.net
>
> This patchset contains the following patches:
>
> 0001-sched_ext-Simplify-scx_can_stop_tick-invocation-in-s.patch
> 0002-sched_ext-Add-scx_enabled-test-to-start_class-promot.patch
> 0003-sched_ext-Use-update_curr_common-in-update_curr_scx.patch
> 0004-sched_ext-Simplify-UP-support-by-enabling-sched_clas.patch
> 0005-sched_ext-Improve-comment-on-idle_sched_class-except.patch
> 0006-sched_ext-Make-task_can_run_on_remote_rq-use-common-.patch
Applied 1-6 (0006 updated to v2) to sched_ext/for-6.12.
Thanks.
--
tejun
end of thread, other threads:[~2024-08-06 19:39 UTC | newest]
Thread overview: 18+ messages
2024-08-04 2:40 [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Tejun Heo
2024-08-04 2:40 ` [PATCH 1/6] sched_ext: Simplify scx_can_stop_tick() invocation in sched_can_stop_tick() Tejun Heo
2024-08-05 17:55 ` David Vernet
2024-08-04 2:40 ` [PATCH 2/6] sched_ext: Add scx_enabled() test to @start_class promotion in put_prev_task_balance() Tejun Heo
2024-08-05 17:57 ` David Vernet
2024-08-04 2:40 ` [PATCH 3/6] sched_ext: Use update_curr_common() in update_curr_scx() Tejun Heo
2024-08-05 18:23 ` David Vernet
2024-08-04 2:40 ` [PATCH 4/6] sched_ext: Simplify UP support by enabling sched_class->balance() in UP Tejun Heo
2024-08-05 19:49 ` David Vernet
2024-08-04 2:40 ` [PATCH 5/6] sched_ext: Improve comment on idle_sched_class exception in scx_task_iter_next_locked() Tejun Heo
2024-08-05 19:50 ` David Vernet
2024-08-04 2:40 ` [PATCH 6/6] sched_ext: Make task_can_run_on_remote_rq() use common task_allowed_on_cpu() Tejun Heo
2024-08-05 19:55 ` David Vernet
2024-08-06 8:12 ` Peter Zijlstra
2024-08-06 17:04 ` Tejun Heo
2024-08-06 19:39 ` [PATCH v2 " Tejun Heo
2024-08-06 8:13 ` [PATCHSET sched_ext/for-6.12] sched_ext: Misc updates Peter Zijlstra
2024-08-06 19:39 ` Tejun Heo