public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq()
@ 2024-08-30 10:51 Tejun Heo
  2024-08-30 10:52 ` [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched() Tejun Heo
  2024-08-30 17:22 ` [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq() David Vernet
  0 siblings, 2 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-30 10:51 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

When deciding whether a task can be migrated to a CPU,
dispatch_to_local_dsq() was open-coding p->cpus_allowed and scx_rq_online()
tests instead of using task_can_run_on_remote_rq(). This had two problems.

- It was missing the is_migration_disabled() check and thus could try to
  migrate a task which shouldn't be migrated, leading to assertion and
  scheduling failures.

- It was testing p->cpus_ptr directly instead of using task_allowed_on_cpu()
  and thus failed to consider ISA compatibility.

Update dispatch_to_local_dsq() to use task_can_run_on_remote_rq():

- Move scx_ops_error() triggering into task_can_run_on_remote_rq().

- When migration isn't allowed, fall back to the global DSQ instead of the
  source DSQ by returning DTL_INVALID. This is both simpler and an overall
  better behavior.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/ext.c |   40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2203,16 +2203,30 @@ static void consume_local_task(struct rq
  * - The BPF scheduler is bypassed while the rq is offline and we can always say
  *   no to the BPF scheduler initiated migrations while offline.
  */
-static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq)
+static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
+				      bool trigger_error)
 {
 	int cpu = cpu_of(rq);
 
-	if (!task_allowed_on_cpu(p, cpu))
+	/*
+	 * We don't require the BPF scheduler to avoid dispatching to offline
+	 * CPUs mostly for convenience but also because CPUs can go offline
+	 * between scx_bpf_dispatch() calls and here. Trigger error iff the
+	 * picked CPU is outside the allowed mask.
+	 */
+	if (!task_allowed_on_cpu(p, cpu)) {
+		if (trigger_error)
+			scx_ops_error("SCX_DSQ_LOCAL[_ON] verdict target cpu %d not allowed for %s[%d]",
+				      cpu_of(rq), p->comm, p->pid);
 		return false;
+	}
+
 	if (unlikely(is_migration_disabled(p)))
 		return false;
+
 	if (!scx_rq_online(rq))
 		return false;
+
 	return true;
 }
 
@@ -2240,9 +2254,8 @@ static bool consume_remote_task(struct r
 	return move_task_to_local_dsq(p, 0, task_rq, rq);
 }
 #else	/* CONFIG_SMP */
-static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq) { return false; }
-static bool consume_remote_task(struct rq *rq, struct scx_dispatch_q *dsq,
-				struct task_struct *p, struct rq *task_rq) { return false; }
+static inline bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq, bool trigger_error) { return false; }
+static inline bool consume_remote_task(struct rq *rq, struct scx_dispatch_q *dsq, struct task_struct *p, struct rq *task_rq) { return false; }
 #endif	/* CONFIG_SMP */
 
 static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
@@ -2267,7 +2280,7 @@ retry:
 			return true;
 		}
 
-		if (task_can_run_on_remote_rq(p, rq)) {
+		if (task_can_run_on_remote_rq(p, rq, false)) {
 			if (likely(consume_remote_task(rq, dsq, p, task_rq)))
 				return true;
 			goto retry;
@@ -2330,7 +2343,7 @@ dispatch_to_local_dsq(struct rq *rq, u64
 	}
 
 #ifdef CONFIG_SMP
-	if (cpumask_test_cpu(cpu_of(dst_rq), p->cpus_ptr)) {
+	if (likely(task_can_run_on_remote_rq(p, dst_rq, true))) {
 		bool dsp;
 
 		/*
@@ -2355,17 +2368,6 @@ dispatch_to_local_dsq(struct rq *rq, u64
 			raw_spin_rq_lock(src_rq);
 		}
 
-		/*
-		 * We don't require the BPF scheduler to avoid dispatching to
-		 * offline CPUs mostly for convenience but also because CPUs can
-		 * go offline between scx_bpf_dispatch() calls and here. If @p
-		 * is destined to an offline CPU, queue it on its current CPU
-		 * instead, which should always be safe. As this is an allowed
-		 * behavior, don't trigger an ops error.
-		 */
-		if (!scx_rq_online(dst_rq))
-			dst_rq = src_rq;
-
 		if (src_rq == dst_rq) {
 			/*
 			 * As @p is staying on the same rq, there's no need to
@@ -2399,8 +2401,6 @@ dispatch_to_local_dsq(struct rq *rq, u64
 	}
 #endif	/* CONFIG_SMP */
 
-	scx_ops_error("SCX_DSQ_LOCAL[_ON] verdict target cpu %d not allowed for %s[%d]",
-		      cpu_of(dst_rq), p->comm, p->pid);
 	return DTL_INVALID;
 }
 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 10:51 [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq() Tejun Heo
@ 2024-08-30 10:52 ` Tejun Heo
  2024-08-30 17:40   ` David Vernet
  2024-08-30 17:54   ` [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() " Tejun Heo
  2024-08-30 17:22 ` [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq() David Vernet
  1 sibling, 2 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-30 10:52 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

Since 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()"),
sched_ext's balance path terminates rq_pin in the outermost function. This is
simpler and in line with what other balance functions are doing, but it loses
control over rq->clock_update_flags, which makes assert_clock_updated()
trigger if another CPU pins the rq lock.

The only place this matters is touch_core_sched() which uses the timestamp
to order tasks from sibling rq's. For now, switch to ktime_get_ns(). Later,
it'd be better to use per-core dispatch sequence number.

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/ext.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1453,13 +1453,20 @@ static void schedule_deferred(struct rq
  */
 static void touch_core_sched(struct rq *rq, struct task_struct *p)
 {
+	lockdep_assert_rq_held(rq);
+
 #ifdef CONFIG_SCHED_CORE
 	/*
 	 * It's okay to update the timestamp spuriously. Use
 	 * sched_core_disabled() which is cheaper than enabled().
+	 *
+	 * TODO: ktime_get_ns() is used because rq_clock_task() can't be used as
+	 * SCX balance path doesn't pin the rq. As this is used to determine
+	 * ordering between tasks of sibling CPUs, it'd be better to use
+	 * per-core dispatch sequence instead.
 	 */
 	if (!sched_core_disabled())
-		p->scx.core_sched_at = rq_clock_task(rq);
+		p->scx.core_sched_at = ktime_get_ns();
 #endif
 }
 
@@ -1476,7 +1483,6 @@ static void touch_core_sched(struct rq *
 static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
 {
 	lockdep_assert_rq_held(rq);
-	assert_clock_updated(rq);
 
 #ifdef CONFIG_SCHED_CORE
 	if (SCX_HAS_OP(core_sched_before))

* Re: [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq()
  2024-08-30 10:51 [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq() Tejun Heo
  2024-08-30 10:52 ` [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched() Tejun Heo
@ 2024-08-30 17:22 ` David Vernet
  2024-08-30 17:35   ` Tejun Heo
  1 sibling, 1 reply; 10+ messages in thread
From: David Vernet @ 2024-08-30 17:22 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 12:51:40AM -1000, Tejun Heo wrote:
> When deciding whether a task can be migrated to a CPU,
> dispatch_to_local_dsq() was open-coding p->cpus_allowed and scx_rq_online()
> tests instead of using task_can_run_on_remote_rq(). This had two problems.
> 
> - It was missing the is_migration_disabled() check and thus could try to
>   migrate a task which shouldn't be migrated, leading to assertion and
>   scheduling failures.
> 
> - It was testing p->cpus_ptr directly instead of using task_allowed_on_cpu()
>   and thus failed to consider ISA compatibility.
> 
> Update dispatch_to_local_dsq() to use task_can_run_on_remote_rq():
> 
> - Move scx_ops_error() triggering into task_can_run_on_remote_rq().
> 
> - When migration isn't allowed, fall back to the global DSQ instead of the
>   source DSQ by returning DTL_INVALID. This is both simpler and an overall
>   better behavior.

Should we also be falling back to the global DSQ if we fail the check
when called from process_ddsp_deferred_locals()? This patch doesn't
change anything there given that we had the same behavior before when we
failed the cpumask_test_cpu(cpu_of(dst_rq), p->cpus_ptr) check, but I'm
not following why we would need to fall back to the global DSQ in
finish_dispatch() but not in process_ddsp_deferred_locals().

This doesn't affect the rest of the cleanup + fix, which LGTM:

Acked-by: David Vernet <void@manifault.com>

Thanks,
David

* Re: [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq()
  2024-08-30 17:22 ` [PATCH 1/2 sched_ext/for-6.12] sched_ext: Use task_can_run_on_remote_rq() test in dispatch_to_local_dsq() David Vernet
@ 2024-08-30 17:35   ` Tejun Heo
  0 siblings, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-30 17:35 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 12:22:07PM -0500, David Vernet wrote:
> On Fri, Aug 30, 2024 at 12:51:40AM -1000, Tejun Heo wrote:
> > When deciding whether a task can be migrated to a CPU,
> > dispatch_to_local_dsq() was open-coding p->cpus_allowed and scx_rq_online()
> > tests instead of using task_can_run_on_remote_rq(). This had two problems.
> > 
> > - It was missing the is_migration_disabled() check and thus could try to
> >   migrate a task which shouldn't be migrated, leading to assertion and
> >   scheduling failures.
> > 
> > - It was testing p->cpus_ptr directly instead of using task_allowed_on_cpu()
> >   and thus failed to consider ISA compatibility.
> > 
> > Update dispatch_to_local_dsq() to use task_can_run_on_remote_rq():
> > 
> > - Move scx_ops_error() triggering into task_can_run_on_remote_rq().
> > 
> > - When migration isn't allowed, fall back to the global DSQ instead of the
> >   source DSQ by returning DTL_INVALID. This is both simpler and an overall
> >   better behavior.
> 
> Should we also be falling back to the global DSQ if we fail the check
> when called from process_ddsp_deferred_locals()? This patch doesn't
> change anything there given that we had the same behavior before when we
> failed the cpumask_test_cpu(cpu_of(dst_rq), p->cpus_ptr) check, but I'm
> not following why we would need to fall back to the global DSQ in
> finish_dispatch() but not in process_ddsp_deferred_locals().

Yes, this actually happens as part of the scx_bpf_dispatch_from_dsq()
patchset - 0004-sched_ext-Make-dispatch_to_local_dsq-return-void.patch. I'll
update the title / description of that patchset.

Thanks.

-- 
tejun

* Re: [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 10:52 ` [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched() Tejun Heo
@ 2024-08-30 17:40   ` David Vernet
  2024-08-30 17:45     ` Tejun Heo
  2024-09-02  9:59     ` Peter Zijlstra
  2024-08-30 17:54   ` [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() " Tejun Heo
  1 sibling, 2 replies; 10+ messages in thread
From: David Vernet @ 2024-08-30 17:40 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 12:52:48AM -1000, Tejun Heo wrote:
> Since sched_ext: Unpin and repin rq lock from balance_scx(), sched_ext's
> balance path terminates rq_pin in the outermost function. This is simpler
> and in line with what other balance functions are doing but it loses control
> over rq->clock_update_flags which makes assert_clock_updated() trigger if
> another CPU pins the rq lock.
> 
> The only place this matters is touch_core_sched() which uses the timestamp
> to order tasks from sibling rq's. For now, switch to ktime_get_ns(). Later,
> it'd be better to use per-core dispatch sequence number.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
> Cc: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/sched/ext.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -1453,13 +1453,20 @@ static void schedule_deferred(struct rq
>   */
>  static void touch_core_sched(struct rq *rq, struct task_struct *p)
>  {
> +	lockdep_assert_rq_held(rq);
> +
>  #ifdef CONFIG_SCHED_CORE
>  	/*
>  	 * It's okay to update the timestamp spuriously. Use
>  	 * sched_core_disabled() which is cheaper than enabled().
> +	 *
> +	 * TODO: ktime_get_ns() is used because rq_clock_task() can't be used as
> +	 * SCX balance path doesn't pin the rq. As this is used to determine
> +	 * ordering between tasks of sibling CPUs, it'd be better to use
> +	 * per-core dispatch sequence instead.
>  	 */
>  	if (!sched_core_disabled())
> -		p->scx.core_sched_at = rq_clock_task(rq);
> +		p->scx.core_sched_at = ktime_get_ns();

Should we just use sched_clock_cpu()? That's what rq->clock is updated
from, and it's what fair.c does on the balance path when the rq lock is
unpinned.

Thanks,
David

>  #endif
>  }
>  
> @@ -1476,7 +1483,6 @@ static void touch_core_sched(struct rq *
>  static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
>  {
>  	lockdep_assert_rq_held(rq);
> -	assert_clock_updated(rq);
>  
>  #ifdef CONFIG_SCHED_CORE
>  	if (SCX_HAS_OP(core_sched_before))

* Re: [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 17:40   ` David Vernet
@ 2024-08-30 17:45     ` Tejun Heo
  2024-09-02  9:59     ` Peter Zijlstra
  1 sibling, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-30 17:45 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 12:40:14PM -0500, David Vernet wrote:
> >  	if (!sched_core_disabled())
> > -		p->scx.core_sched_at = rq_clock_task(rq);
> > +		p->scx.core_sched_at = ktime_get_ns();
> 
> Should we just use sched_clock_cpu()? That's what rq->clock is updated
> from, and it's what fair.c does on the balance path when the rq lock is
> unpinned.

That sounds more sensible. Will update.

Thanks.

-- 
tejun

* [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 10:52 ` [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched() Tejun Heo
  2024-08-30 17:40   ` David Vernet
@ 2024-08-30 17:54   ` Tejun Heo
  2024-08-30 18:01     ` David Vernet
  2024-08-31  5:36     ` Tejun Heo
  1 sibling, 2 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-30 17:54 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

Since 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from
balance_scx()"), sched_ext's balance path terminates rq_pin in the outermost
function. This is simpler and in line with what other balance functions are
doing, but it loses control over rq->clock_update_flags, which makes
assert_clock_updated() trigger if another CPU pins the rq lock.

The only place this matters is touch_core_sched() which uses the timestamp
to order tasks from sibling rq's. Switch to sched_clock_cpu(). Later, it may
be better to use per-core dispatch sequence number.

v2: Use sched_clock_cpu() instead of ktime_get_ns() per David.

Signed-off-by: Tejun Heo <tj@kernel.org>
Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
Cc: David Vernet <void@manifault.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/ext.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -1453,13 +1453,18 @@ static void schedule_deferred(struct rq
  */
 static void touch_core_sched(struct rq *rq, struct task_struct *p)
 {
+	lockdep_assert_rq_held(rq);
+
 #ifdef CONFIG_SCHED_CORE
 	/*
 	 * It's okay to update the timestamp spuriously. Use
 	 * sched_core_disabled() which is cheaper than enabled().
+	 *
+	 * As this is used to determine ordering between tasks of sibling CPUs,
+	 * it may be better to use per-core dispatch sequence instead.
 	 */
 	if (!sched_core_disabled())
-		p->scx.core_sched_at = rq_clock_task(rq);
+		p->scx.core_sched_at = sched_clock_cpu(cpu_of(rq));
 #endif
 }
 
@@ -1476,7 +1481,6 @@ static void touch_core_sched(struct rq *
 static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
 {
 	lockdep_assert_rq_held(rq);
-	assert_clock_updated(rq);
 
 #ifdef CONFIG_SCHED_CORE
 	if (SCX_HAS_OP(core_sched_before))

* Re: [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 17:54   ` [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() " Tejun Heo
@ 2024-08-30 18:01     ` David Vernet
  2024-08-31  5:36     ` Tejun Heo
  1 sibling, 0 replies; 10+ messages in thread
From: David Vernet @ 2024-08-30 18:01 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 07:54:41AM -1000, Tejun Heo wrote:
> Since 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from
> balance_scx()"), sched_ext's balance path terminates rq_pin in the outermost
> function. This is simpler and in line with what other balance functions are
> doing but it loses control over rq->clock_update_flags which makes
> assert_clock_updated() trigger if another CPU pins the rq lock.
> 
> The only place this matters is touch_core_sched() which uses the timestamp
> to order tasks from sibling rq's. Switch to sched_clock_cpu(). Later, it may
> be better to use per-core dispatch sequence number.
> 
> v2: Use sched_clock_cpu() instead of ktime_get_ns() per David.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
> Cc: David Vernet <void@manifault.com>
> Cc: Peter Zijlstra <peterz@infradead.org>

Acked-by: David Vernet <void@manifault.com>

* Re: [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 17:54   ` [PATCH v2 2/2 sched_ext/for-6.12] sched_ext: Use sched_clock_cpu() " Tejun Heo
  2024-08-30 18:01     ` David Vernet
@ 2024-08-31  5:36     ` Tejun Heo
  1 sibling, 0 replies; 10+ messages in thread
From: Tejun Heo @ 2024-08-31  5:36 UTC (permalink / raw)
  To: David Vernet; +Cc: linux-kernel, Peter Zijlstra, kernel-team

On Fri, Aug 30, 2024 at 07:54:41AM -1000, Tejun Heo wrote:
> Since 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from
> balance_scx()"), sched_ext's balance path terminates rq_pin in the outermost
> function. This is simpler and in line with what other balance functions are
> doing but it loses control over rq->clock_update_flags which makes
> assert_clock_updated() trigger if another CPU pins the rq lock.
> 
> The only place this matters is touch_core_sched() which uses the timestamp
> to order tasks from sibling rq's. Switch to sched_clock_cpu(). Later, it may
> be better to use per-core dispatch sequence number.
> 
> v2: Use sched_clock_cpu() instead of ktime_get_ns() per David.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
> Cc: David Vernet <void@manifault.com>
> Cc: Peter Zijlstra <peterz@infradead.org>

Applied 1-2 to sched_ext/for-6.12.

Thanks.

-- 
tejun

* Re: [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns() instead of rq_clock_task() in touch_core_sched()
  2024-08-30 17:40   ` David Vernet
  2024-08-30 17:45     ` Tejun Heo
@ 2024-09-02  9:59     ` Peter Zijlstra
  1 sibling, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2024-09-02  9:59 UTC (permalink / raw)
  To: David Vernet; +Cc: Tejun Heo, linux-kernel, kernel-team

On Fri, Aug 30, 2024 at 12:40:14PM -0500, David Vernet wrote:
> On Fri, Aug 30, 2024 at 12:52:48AM -1000, Tejun Heo wrote:
> > Since sched_ext: Unpin and repin rq lock from balance_scx(), sched_ext's
> > balance path terminates rq_pin in the outermost function. This is simpler
> > and in line with what other balance functions are doing but it loses control
> > over rq->clock_update_flags which makes assert_clock_updated() trigger if
> > another CPU pins the rq lock.
> > 
> > The only place this matters is touch_core_sched() which uses the timestamp
> > to order tasks from sibling rq's. For now, switch to ktime_get_ns(). Later,
> > it'd be better to use per-core dispatch sequence number.
> > 
> > Signed-off-by: Tejun Heo <tj@kernel.org>
> > Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > ---
> >  kernel/sched/ext.c |   10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> > 
> > --- a/kernel/sched/ext.c
> > +++ b/kernel/sched/ext.c
> > @@ -1453,13 +1453,20 @@ static void schedule_deferred(struct rq
> >   */
> >  static void touch_core_sched(struct rq *rq, struct task_struct *p)
> >  {
> > +	lockdep_assert_rq_held(rq);
> > +
> >  #ifdef CONFIG_SCHED_CORE
> >  	/*
> >  	 * It's okay to update the timestamp spuriously. Use
> >  	 * sched_core_disabled() which is cheaper than enabled().
> > +	 *
> > +	 * TODO: ktime_get_ns() is used because rq_clock_task() can't be used as
> > +	 * SCX balance path doesn't pin the rq. As this is used to determine
> > +	 * ordering between tasks of sibling CPUs, it'd be better to use
> > +	 * per-core dispatch sequence instead.
> >  	 */
> >  	if (!sched_core_disabled())
> > -		p->scx.core_sched_at = rq_clock_task(rq);
> > +		p->scx.core_sched_at = ktime_get_ns();
> 
> Should we just use sched_clock_cpu()? That's what rq->clock is updated
> from, and it's what fair.c does on the balance path when the rq lock is
> unpinned.

Right, so on x86 with wobbly TSC (still possible in this day and age)
ktime *must* use the HPET, while sched_clock_cpu() makes an 'educated'
guess using TSC and tick based HPET stamps and windows.

IOW, on sane machines it doesn't matter much, but in the HPET case the
sched_clock() thing is a lot faster.
