From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Chen Yu <yu.c.chen@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Juri Lelli <juri.lelli@redhat.com>
Cc: Tim Chen <tim.c.chen@intel.com>, Aaron Lu <aaron.lu@intel.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Daniel Bristot de Oliveira <bristot@redhat.com>,
Valentin Schneider <vschneid@redhat.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 2/2] sched/fair: skip the cache hot CPU in select_idle_cpu()
Date: Mon, 11 Sep 2023 11:26:50 -0400 [thread overview]
Message-ID: <30a7ff14-3f48-e8cf-333f-cbb7499656e3@efficios.com> (raw)
In-Reply-To: <d49cf5748aa7c6d69580315d2373a9eafa21c21f.1694397335.git.yu.c.chen@intel.com>
On 9/10/23 22:50, Chen Yu wrote:
> When task p is woken up, the scheduler leverages select_idle_sibling()
> to find an idle CPU for it. p's previous CPU is usually a preference
> because it can improve cache locality. However in many cases the
> previous CPU has already been taken by other wakees, thus p has to
> find another idle CPU.
>
> Inspired by Mathieu's idea[1], consider the sleep time of the task.
> If the task is a short sleeper, keep p's previous CPU idle
> for a short while. Later, when p is woken up, it can choose its
> previous CPU in select_idle_sibling(). While p's previous CPU is
> reserved, other wakees are not allowed to choose this CPU in
> select_idle_cpu(). The reservation period is set to the task's
> average sleep time. That is to say, if p is a short sleeping task,
> there is no need to migrate p to another idle CPU.
>
> This does not break the work conservation of the scheduler,
> because the wakee will still try its best to find an idle CPU.
> The difference is that, different idle CPUs might have different
> priorities. On the other hand, in theory this extra check could
> increase the failure ratio of select_idle_cpu(), but per the
> initial test result, no regression is detected.
>
> Baseline: tip/sched/core, on top of:
> Commit 3f4feb58037a ("sched: Misc cleanups")
>
> Benchmark results on Intel Sapphire Rapids, 112 CPUs/socket, 2 sockets.
> cpufreq governor is performance, turbo boost is disabled, C-states deeper
> than C1 are disabled, Numa balancing is disabled.
>
> netperf
> =======
> case load baseline(std%) compare%( std%)
> UDP_RR 56-threads 1.00 ( 1.34) +1.05 ( 1.04)
> UDP_RR 112-threads 1.00 ( 7.94) -0.68 ( 14.42)
> UDP_RR 168-threads 1.00 ( 33.17) +49.63 ( 5.96)
> UDP_RR 224-threads 1.00 ( 13.52) +122.53 ( 18.50)
>
> Noticeable improvements in netperf are observed in the 168 and
> 224 thread cases.
>
> hackbench
> =========
> case load baseline(std%) compare%( std%)
> process-pipe 1-groups 1.00 ( 5.61) -4.69 ( 1.48)
> process-pipe 2-groups 1.00 ( 8.74) -0.24 ( 3.10)
> process-pipe 4-groups 1.00 ( 3.52) +1.61 ( 4.41)
> process-sockets 1-groups 1.00 ( 4.73) +2.32 ( 0.95)
> process-sockets 2-groups 1.00 ( 1.27) -3.29 ( 0.97)
> process-sockets 4-groups 1.00 ( 0.09) +0.24 ( 0.09)
> threads-pipe 1-groups 1.00 ( 10.44) -5.88 ( 1.49)
> threads-pipe 2-groups 1.00 ( 19.15) +5.31 ( 12.90)
> threads-pipe 4-groups 1.00 ( 1.74) -5.01 ( 6.44)
> threads-sockets 1-groups 1.00 ( 1.58) -1.79 ( 0.43)
> threads-sockets 2-groups 1.00 ( 1.19) -8.43 ( 6.91)
> threads-sockets 4-groups 1.00 ( 0.10) -0.09 ( 0.07)
>
> schbench(old)
> ========
> case load baseline(std%) compare%( std%)
> normal 1-mthreads 1.00 ( 0.63) +1.28 ( 0.37)
> normal 2-mthreads 1.00 ( 8.33) +1.58 ( 2.83)
> normal 4-mthreads 1.00 ( 2.48) -2.98 ( 3.28)
> normal 8-mthreads 1.00 ( 3.97) +5.01 ( 1.28)
>
> Not much difference is observed in hackbench/schbench, due to
> run-to-run variance.
>
> Link: https://lore.kernel.org/lkml/20230905171105.1005672-2-mathieu.desnoyers@efficios.com/ #1
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
> kernel/sched/fair.c | 30 +++++++++++++++++++++++++++---
> kernel/sched/features.h | 1 +
> kernel/sched/sched.h | 1 +
> 3 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e20f50726ab8..fe3b760c9654 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6629,6 +6629,21 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> hrtick_update(rq);
> now = sched_clock_cpu(cpu_of(rq));
> p->se.prev_sleep_time = task_sleep ? now : 0;
> +#ifdef CONFIG_SMP
> + /*
> + * If this rq will become idle, and dequeued task is
> + * a short sleeping one, check if we can reserve
> + * this idle CPU for that task for a short while.
> + * During this reservation period, other wakees will
> + * skip this 'idle' CPU in select_idle_cpu(), and this
> + * short sleeping task can pick its previous CPU in
> + * select_idle_sibling(), which brings better cache
> + * locality.
> + */
> + if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running &&
> + p->se.sleep_avg && p->se.sleep_avg < sysctl_sched_migration_cost)
> + rq->cache_hot_timeout = now + p->se.sleep_avg;
This is really cool!
There is one scenario that worries me with this approach: workloads
that sleep for a long time and then have short blocked periods.
Those bursts will likely bring the average to values too high
to stay below sysctl_sched_migration_cost.
I wonder if changing the code above to the following would help?
if (sched_feat(SIS_CACHE) && task_sleep && !rq->nr_running && p->se.sleep_avg)
	rq->cache_hot_timeout = now + min(sysctl_sched_migration_cost, p->se.sleep_avg);
For tasks that have a large sleep_avg, it would make this rq appear
as not idle for rq selection for a window of
sysctl_sched_migration_cost. If the sleep ends up being a long one,
preventing other tasks from being migrated to this rq for such a tiny
window should not matter performance-wise, and I would expect it
to help workloads that come in bursts.
Thanks,
Mathieu
> +#endif
> }
>
> #ifdef CONFIG_SMP
> @@ -6982,8 +6997,13 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
> static inline int __select_idle_cpu(int cpu, struct task_struct *p)
> {
> if ((available_idle_cpu(cpu) || sched_idle_cpu(cpu)) &&
> - sched_cpu_cookie_match(cpu_rq(cpu), p))
> + sched_cpu_cookie_match(cpu_rq(cpu), p)) {
> + if (sched_feat(SIS_CACHE) &&
> + sched_clock_cpu(cpu) < cpu_rq(cpu)->cache_hot_timeout)
> + return -1;
> +
> return cpu;
> + }
>
> return -1;
> }
> @@ -7052,10 +7072,14 @@ static int select_idle_core(struct task_struct *p, int core, struct cpumask *cpu
> int cpu;
>
> for_each_cpu(cpu, cpu_smt_mask(core)) {
> - if (!available_idle_cpu(cpu)) {
> + bool cache_hot = sched_feat(SIS_CACHE) ?
> + sched_clock_cpu(cpu) < cpu_rq(cpu)->cache_hot_timeout : false;
> +
> + if (!available_idle_cpu(cpu) || cache_hot) {
> idle = false;
> if (*idle_cpu == -1) {
> - if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, p->cpus_ptr)) {
> + if (sched_idle_cpu(cpu) && cpumask_test_cpu(cpu, p->cpus_ptr) &&
> + !cache_hot) {
> *idle_cpu = cpu;
> break;
> }
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index f770168230ae..04ed9fcf67f8 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -51,6 +51,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
> */
> SCHED_FEAT(SIS_PROP, false)
> SCHED_FEAT(SIS_UTIL, true)
> +SCHED_FEAT(SIS_CACHE, true)
>
> /*
> * Issue a WARN when we do multiple update_rq_clock() calls
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 62013c49c451..7a2c12c3b6d0 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1078,6 +1078,7 @@ struct rq {
> #endif
> u64 idle_stamp;
> u64 avg_idle;
> + u64 cache_hot_timeout;
>
> unsigned long wake_stamp;
> u64 wake_avg_idle;
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com