From: Jon Hunter <jonathanh@nvidia.com>
To: Juri Lelli <juri.lelli@redhat.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Christian Loehle <christian.loehle@arm.com>,
Thierry Reding <treding@nvidia.com>,
Waiman Long <longman@redhat.com>, Tejun Heo <tj@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Koutny <mkoutny@suse.com>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
Phil Auld <pauld@redhat.com>, Qais Yousef <qyousef@layalina.io>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
"Joel Fernandes (Google)" <joel@joelfernandes.org>,
Suleiman Souhlal <suleiman@google.com>,
Aashish Sharma <shraash@google.com>,
Shin Kawamura <kawasin@google.com>,
Vineeth Remanan Pillai <vineeth@bitbyteword.org>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>
Subject: Re: [PATCH v2 3/2] sched/deadline: Check bandwidth overflow earlier for hotplug
Date: Fri, 21 Feb 2025 11:56:14 +0000 [thread overview]
Message-ID: <1c75682e-a720-4bd0-8bcc-5443b598457f@nvidia.com> (raw)
In-Reply-To: <Z7dJe7XfG0e6ECwr@jlelli-thinkpadt14gen4.remote.csb>
On 20/02/2025 15:25, Juri Lelli wrote:
> On 20/02/25 11:40, Juri Lelli wrote:
>> On 19/02/25 19:14, Dietmar Eggemann wrote:
>
> ...
>
>> OK. CPU3 + CPU4 (CPU5 offline).
>>
>>> [ 171.003085] __dl_update() (3) cpu=2 rq->dl.extra_bw=1122848
>>> [ 171.003091] __dl_update() (3) cpu=3 rq->dl.extra_bw=1022361
>>> [ 171.003096] __dl_update() (3) cpu=4 rq->dl.extra_bw=1035468
>>> [ 171.003103] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
>>> [ 171.003113] __dl_server_attach_root() called cpu=2
>>> [ 171.003118] dl_bw_cpus() cpu=2 rd->span=0-2 cpu_active_mask=0-4 cpumask_weight(rd->span)=3 type=DYN
>>> [ 171.003127] __dl_add() tsk_bw=52428 dl_b->total_bw=157284 type=DYN rd->span=0-2
>>> [ 171.003136] __dl_update() (3) cpu=0 rq->dl.extra_bw=477111
>>> [ 171.003141] __dl_update() (3) cpu=1 rq->dl.extra_bw=851970
>>> [ 171.003147] __dl_update() (3) cpu=2 rq->dl.extra_bw=1105372
>>> [ 171.003188] root domain span: 0-2
>>> [ 171.003194] default domain span: 3-5
>>> [ 171.003220] rd 0-2: Checking EAS, schedutil is mandatory
>>> [ 171.005840] psci: CPU5 killed (polled 0 ms)
>>
>> OK. DYN has (CPU0,1,2) 157284 and DEF (CPU3,4) 104856.
>>
>> CPU4 going offline (it's isolated on DEF).
>>
>>> [ 171.006436] dl_bw_deactivate() called cpu=4
>>> [ 171.006446] __dl_bw_capacity() mask=3-5 cap=892
>>> [ 171.006454] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
>>> [ 171.006464] dl_bw_manage: cpu=4 cap=446 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=2 type=DEF span=3-5
>>> [ 171.006475] dl_bw_cpus() cpu=4 rd->span=3-5 cpu_active_mask=0-4 cpus=2 type=DEF
>>> [ 171.006485] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
>>> [ 171.006495] Hardware name: ARM Juno development board (r0) (DT)
>>> [ 171.006499] Call trace:
>>> [ 171.006502] show_stack+0x18/0x24 (C)
>>> [ 171.006514] dump_stack_lvl+0x74/0x8c
>>> [ 171.006528] dump_stack+0x18/0x24
>>> [ 171.006541] dl_bw_manage+0x3a0/0x500
>>> [ 171.006554] dl_bw_deactivate+0x40/0x50
>>> [ 171.006564] sched_cpu_deactivate+0x34/0x24c
>>> [ 171.006579] cpuhp_invoke_callback+0x138/0x694
>>> [ 171.006591] cpuhp_thread_fun+0xb0/0x198
>>> [ 171.006604] smpboot_thread_fn+0x200/0x224
>>> [ 171.006616] kthread+0x12c/0x204
>>> [ 171.006627] ret_from_fork+0x10/0x20
>>> [ 171.006639] __dl_overflow() dl_b->bw=996147 cap=446 cap_scale(dl_b->bw, cap)=433868 dl_b->total_bw=104856 old_bw=52428 new_bw=0 type=DEF rd->span=3-5
>>> [ 171.006652] dl_bw_manage() cpu=4 cap=446 overflow=0 req=0 return=0 type=DEF
>>> [ 171.006706] partition_sched_domains() called
>>> [ 171.006713] CPU: 4 UID: 0 PID: 36 Comm: cpuhp/4 Not tainted 6.13.0-09343-g9ce523149e08-dirty #172
>>> [ 171.006722] Hardware name: ARM Juno development board (r0) (DT)
>>> [ 171.006727] Call trace:
>>> [ 171.006730] show_stack+0x18/0x24 (C)
>>> [ 171.006740] dump_stack_lvl+0x74/0x8c
>>> [ 171.006754] dump_stack+0x18/0x24
>>> [ 171.006767] partition_sched_domains+0x48/0x7c
>>> [ 171.006778] sched_cpu_deactivate+0x1a8/0x24c
>>> [ 171.006792] cpuhp_invoke_callback+0x138/0x694
>>> [ 171.006805] cpuhp_thread_fun+0xb0/0x198
>>> [ 171.006817] smpboot_thread_fn+0x200/0x224
>>> [ 171.006829] kthread+0x12c/0x204
>>> [ 171.006840] ret_from_fork+0x10/0x20
>>> [ 171.006852] partition_sched_domains_locked() ndoms_new=1
>>> [ 171.006861] partition_sched_domains_locked() goto match2
>>> [ 171.006867] rd 0-2: Checking EAS, schedutil is mandatory
>>> [ 171.007774] psci: CPU4 killed (polled 4 ms)
>>
>> As I guess you were saying above, CPU4's contribution is not removed
>> from DEF.
>>
>>> [ 171.007971] dl_bw_deactivate() called cpu=3
>>> [ 171.007981] __dl_bw_capacity() mask=3-5 cap=446
>>> [ 171.007989] dl_bw_cpus() cpu=3 rd->span=3-5 cpu_active_mask=0-3 cpus=1 type=DEF
>>> [ 171.007999] dl_bw_manage: cpu=3 cap=0 fair_server_bw=52428 total_bw=104856 dl_bw_cpus=1 type=DEF span=3-5
>> ^^^^
>> And this is now wrong. :/
>
> So, CPU4 was still on DEF and we don't go through any of the accounting
> functions. I wonder if we could simplify this by always re-doing the
> accounting after the root domains are stable (also for
> partition_sched_domains()). So, please take a look at what is below. It
> can definitely be better encapsulated (more cleanups are needed as well)
> and maybe it's just useless/stupid (hard to say here because I always
> see 'pass' whatever I try to change), but anyway. Also pushed to the
> usual branch.
>
> ---
> include/linux/sched/deadline.h | 4 ++++
> kernel/cgroup/cpuset.c | 13 ++++++++-----
> kernel/sched/deadline.c | 11 ++++++++---
> kernel/sched/topology.c | 1 +
> 4 files changed, 21 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
> index 3a912ab42bb5..8fc4918c6f3f 100644
> --- a/include/linux/sched/deadline.h
> +++ b/include/linux/sched/deadline.h
> @@ -34,6 +34,10 @@ static inline bool dl_time_before(u64 a, u64 b)
> struct root_domain;
> extern void dl_add_task_root_domain(struct task_struct *p);
> extern void dl_clear_root_domain(struct root_domain *rd);
> +extern void dl_clear_root_domain_cpu(int cpu);
> +
> +extern u64 dl_generation;
> +extern bool dl_bw_visited(int cpu, u64 gen);
>
> #endif /* CONFIG_SMP */
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 0f910c828973..52243dcc61ab 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -958,6 +958,8 @@ static void dl_rebuild_rd_accounting(void)
> {
> struct cpuset *cs = NULL;
> struct cgroup_subsys_state *pos_css;
> + int cpu;
> + u64 gen = ++dl_generation;
>
> lockdep_assert_held(&cpuset_mutex);
> lockdep_assert_cpus_held();
> @@ -965,11 +967,12 @@ static void dl_rebuild_rd_accounting(void)
>
> rcu_read_lock();
>
> - /*
> - * Clear default root domain DL accounting, it will be computed again
> - * if a task belongs to it.
> - */
> - dl_clear_root_domain(&def_root_domain);
> + for_each_possible_cpu(cpu) {
> + if (dl_bw_visited(cpu, gen))
> + continue;
> +
> + dl_clear_root_domain_cpu(cpu);
> + }
>
> cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 8f7420e0c9d6..a6723ed84e68 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -166,7 +166,7 @@ static inline unsigned long dl_bw_capacity(int i)
> }
> }
>
> -static inline bool dl_bw_visited(int cpu, u64 gen)
> +bool dl_bw_visited(int cpu, u64 gen)
> {
> struct root_domain *rd = cpu_rq(cpu)->rd;
>
> @@ -207,7 +207,7 @@ static inline unsigned long dl_bw_capacity(int i)
> return SCHED_CAPACITY_SCALE;
> }
>
> -static inline bool dl_bw_visited(int cpu, u64 gen)
> +bool dl_bw_visited(int cpu, u64 gen)
> {
> return false;
> }
> @@ -3037,6 +3037,11 @@ void dl_clear_root_domain(struct root_domain *rd)
> }
> }
>
> +void dl_clear_root_domain_cpu(int cpu) {
> + printk_deferred("%s: cpu=%d\n", __func__, cpu);
> + dl_clear_root_domain(cpu_rq(cpu)->rd);
> +}
> +
> #endif /* CONFIG_SMP */
>
> static void switched_from_dl(struct rq *rq, struct task_struct *p)
> @@ -3216,7 +3221,7 @@ DEFINE_SCHED_CLASS(dl) = {
> };
>
> /* Used for dl_bw check and update, used under sched_rt_handler()::mutex */
> -static u64 dl_generation;
> +u64 dl_generation;
>
> int sched_dl_global_validate(void)
> {
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index c6a140d8d851..9892e6fa3e57 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2814,5 +2814,6 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
> {
> mutex_lock(&sched_domains_mutex);
> partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
> + dl_rebuild_rd_accounting();
> mutex_unlock(&sched_domains_mutex);
> }
>
Latest branch is not building for me ...
CC kernel/time/hrtimer.o
In file included from kernel/sched/build_utility.c:88:
kernel/sched/topology.c: In function ‘partition_sched_domains’:
kernel/sched/topology.c:2817:9: error: implicit declaration of function ‘dl_rebuild_rd_accounting’ [-Werror=implicit-function-declaration]
2817 | dl_rebuild_rd_accounting();
| ^~~~~~~~~~~~~~~~~~~~~~~~
Looks like we are missing a prototype.
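For what it's worth, a declaration along these lines should make the new call site in partition_sched_domains() visible; where it belongs is a guess on my side, since dl_rebuild_rd_accounting() is currently defined in kernel/cgroup/cpuset.c and may want a different home than the deadline header:

```c
/* Sketch only: prototype for the dl_rebuild_rd_accounting() call added
 * to partition_sched_domains(); the choice of header is an assumption. */
extern void dl_rebuild_rd_accounting(void);
```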
Jon
--
nvpublic