From: Peter Zijlstra <peterz@infradead.org>
To: Srikar Dronamraju <srikar@linux.ibm.com>
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
Ben Segall <bsegall@google.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Ingo Molnar <mingo@kernel.org>,
Juri Lelli <juri.lelli@redhat.com>,
K Prateek Nayak <kprateek.nayak@amd.com>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Mel Gorman <mgorman@suse.de>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Shrikanth Hegde <sshegde@linux.ibm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Swapnil Sapkal <swapnil.sapkal@amd.com>,
Thomas Huth <thuth@redhat.com>,
Valentin Schneider <vschneid@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
virtualization@lists.linux.dev,
Yicong Yang <yangyicong@hisilicon.com>,
Ilya Leoshkevich <iii@linux.ibm.com>
Subject: Re: [PATCH 08/17] sched/core: Implement CPU soft offline/online
Date: Fri, 5 Dec 2025 17:03:26 +0100
Message-ID: <20251205160326.GF2528459@noisy.programming.kicks-ass.net>
In-Reply-To: <20251204175405.1511340-9-srikar@linux.ibm.com>
On Thu, Dec 04, 2025 at 11:23:56PM +0530, Srikar Dronamraju wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 89efff1e1ead..f66fd1e925b0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -8177,13 +8177,16 @@ static void balance_push(struct rq *rq)
>  	 * Only active while going offline and when invoked on the outgoing
>  	 * CPU.
>  	 */
> -	if (!cpu_dying(rq->cpu) || rq != this_rq())
> +	if (cpu_active(rq->cpu) || rq != this_rq())
>  		return;
>
>  	/*
> -	 * Ensure the thing is persistent until balance_push_set(.on = false);
> +	 * Unless soft-offline, ensure the thing is persistent until
> +	 * balance_push_set(.on = false); in case of soft-offline, it only
> +	 * needs to stay long enough to push the current non-pinned tasks out.
>  	 */
> -	rq->balance_callback = &balance_push_callback;
> +	if (cpu_dying(rq->cpu) || rq->nr_running)
> +		rq->balance_callback = &balance_push_callback;
>
>  	/*
>  	 * Both the cpu-hotplug and stop task are in this case and are
> @@ -8392,6 +8395,8 @@ static inline void sched_smt_present_dec(int cpu)
>  #endif
>  }
>
> +static struct cpumask cpu_softoffline_mask;
> +
>  int sched_cpu_activate(unsigned int cpu)
>  {
>  	struct rq *rq = cpu_rq(cpu);
> @@ -8411,7 +8416,10 @@ int sched_cpu_activate(unsigned int cpu)
>  	if (sched_smp_initialized) {
>  		sched_update_numa(cpu, true);
>  		sched_domains_numa_masks_set(cpu);
> -		cpuset_cpu_active();
> +
> +		/* For CPU soft-offline, we don't need to rebuild sched-domains */
> +		if (!cpumask_test_cpu(cpu, &cpu_softoffline_mask))
> +			cpuset_cpu_active();
>  	}
>
>  	scx_rq_activate(rq);
> @@ -8485,7 +8493,11 @@ int sched_cpu_deactivate(unsigned int cpu)
>  		return 0;
>
>  	sched_update_numa(cpu, false);
> -	cpuset_cpu_inactive(cpu);
> +
> +	/* For CPU soft-offline, we don't need to rebuild sched-domains */
> +	if (!cpumask_test_cpu(cpu, &cpu_softoffline_mask))
> +		cpuset_cpu_inactive(cpu);
> +
>  	sched_domains_numa_masks_clear(cpu);
>  	return 0;
>  }
> @@ -10928,3 +10940,25 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx)
>  		set_next_task(rq, ctx->p);
>  }
>  #endif /* CONFIG_SCHED_CLASS_EXT */
> +
> +void set_cpu_softoffline(int cpu, bool soft_offline)
> +{
> +	struct sched_domain *sd;
> +
> +	if (!cpu_online(cpu))
> +		return;
> +
> +	cpumask_set_cpu(cpu, &cpu_softoffline_mask);
> +
> +	rcu_read_lock();
> +	for_each_domain(cpu, sd)
> +		update_group_capacity(sd, cpu);
> +	rcu_read_unlock();
> +
> +	if (soft_offline)
> +		sched_cpu_deactivate(cpu);
> +	else
> +		sched_cpu_activate(cpu);
> +
> +	cpumask_clear_cpu(cpu, &cpu_softoffline_mask);
> +}
What happens if you then offline one of these softoffline CPUs? Doesn't
that do sched_cpu_deactivate() again?
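
IIUC that gives something like this (just sketching my reading of the
series here, not verified against the later patches):

	set_cpu_softoffline(cpu, true)
	  sched_cpu_deactivate(cpu)		/* first deactivate */

	echo 0 > /sys/devices/system/cpu/cpuN/online
	  ...
	  sched_cpu_deactivate(cpu)		/* again, cpu_active already clear */
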
Also, the way this seems to use softoffline_mask is as a hidden argument
to sched_cpu_{de,}activate() instead of as an actual mask.
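
If sched_cpu_{de,}activate() really must behave differently for this case,
passing that in explicitly would at least keep the 'argument' visible at
the call sites. Hand-waving sketch, names invented:

	static int __sched_cpu_deactivate(unsigned int cpu, bool soft);

	int sched_cpu_deactivate(unsigned int cpu)	/* cpuhp callback */
	{
		return __sched_cpu_deactivate(cpu, false);
	}

with set_cpu_softoffline() calling __sched_cpu_deactivate(cpu, true)
directly, instead of the callee peeking at a global mask.
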
Moreover, there does not seem to be any sort of serialization vs
concurrent set_cpu_softoffline() callers. At the very least
update_group_capacity() would end up with indeterminate results.
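
At a minimum I'd expect the whole thing to be serialized, something along
these lines (completely untested sketch, mutex name made up; and it does
nothing for the interaction with real hotplug, which presumably also wants
cpus_read_lock() or similar):

	static DEFINE_MUTEX(softoffline_mutex);

	void set_cpu_softoffline(int cpu, bool soft_offline)
	{
		struct sched_domain *sd;

		/* serialize concurrent soft offline/online requests */
		guard(mutex)(&softoffline_mutex);

		if (!cpu_online(cpu))
			return;

		cpumask_set_cpu(cpu, &cpu_softoffline_mask);

		rcu_read_lock();
		for_each_domain(cpu, sd)
			update_group_capacity(sd, cpu);
		rcu_read_unlock();

		if (soft_offline)
			sched_cpu_deactivate(cpu);
		else
			sched_cpu_activate(cpu);

		cpumask_clear_cpu(cpu, &cpu_softoffline_mask);
	}
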
This all doesn't look 'robust'.