public inbox for linux-kernel@vger.kernel.org
From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: linux-kernel@vger.kernel.org, mingo@kernel.org,
	peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, tglx@linutronix.de,
	yury.norov@gmail.com, gregkh@linuxfoundation.org
Cc: sshegde@linux.ibm.com, pbonzini@redhat.com, seanjc@google.com,
	kprateek.nayak@amd.com, vschneid@redhat.com, iii@linux.ibm.com,
	huschle@linux.ibm.com, rostedt@goodmis.org,
	dietmar.eggemann@arm.com, mgorman@suse.de, bsegall@google.com,
	maddy@linux.ibm.com, srikar@linux.ibm.com, hdanton@sina.com,
	chleroy@kernel.org, vineeth@bitbyteword.org,
	joelagnelf@nvidia.com
Subject: [PATCH v2 15/17] sched/core: Handle steal values and mark CPUs as preferred
Date: Wed,  8 Apr 2026 00:49:48 +0530	[thread overview]
Message-ID: <20260407191950.643549-16-sshegde@linux.ibm.com> (raw)
In-Reply-To: <20260407191950.643549-1-sshegde@linux.ibm.com>

This is the main periodic work item which handles the steal time values.

- Compute the steal time by summing CPUTIME_STEAL across all online CPUs.

- Compute the steal ratio. It is multiplied by 100 to retain the
  fractional part in integer arithmetic.

- If the steal ratio is higher than the threshold, reduce the number of
  preferred CPUs by one core. The last core in the intersection of the
  online and preferred CPU masks is marked as non-preferred.
  At least one core is always left preferred.

- If the steal ratio is lower than the threshold, increase the number of
  preferred CPUs by one core. The first online core which is not in
  cpu_preferred_mask is marked as preferred.
  If all cores are already preferred, bail out.

Increasing/decreasing may need to adjust how cores are spread across NUMA
nodes. It is kept simple for now.

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 kernel/sched/core.c | 52 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1c6fcf1ae4fe..6e2b733adf45 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11349,15 +11349,65 @@ void sched_init_steal_monitor(void)
 	steal_mon.sampling_period_ms  = 1000;		/* once per second */
 }
 
-/* This is only a skeleton. Subsequent patches introduce more of it */
 void sched_steal_detection_work(struct work_struct *work)
 {
 	struct steal_monitor_t *sm = container_of(work, struct steal_monitor_t, work);
+	int this_cpu = raw_smp_processor_id();
+	u64 delta_steal, delta_ns, steal = 0;
+	u64 steal_ratio;
 	ktime_t now;
+	int tmp_cpu;
+
+	for_each_cpu(tmp_cpu, cpu_online_mask)
+		steal += kcpustat_cpu(tmp_cpu).cpustat[CPUTIME_STEAL];
 
 	/* Update the prev_time for the next iteration */
 	now = ktime_get();
+	delta_steal = steal > sm->prev_steal ? steal - sm->prev_steal : 0;
+	delta_ns = max_t(u64, ktime_to_ns(ktime_sub(now, sm->prev_time)), 1);
+
 	sm->prev_time = now;
+	sm->prev_steal = steal;
+
+#ifdef CONFIG_SCHED_SMT
+	/* Multiply by 100 to consider the fractional values of steal time */
+	steal_ratio = (delta_steal * 100 * 100) / (delta_ns * num_online_cpus());
+
+	/* If the steal time values are high, remove one core from the preferred CPUs */
+	if (steal_ratio > sm->high_threshold) {
+		int last_cpu;
+
+		cpumask_and(sm->tmp_mask, cpu_online_mask, cpu_preferred_mask);
+		last_cpu = cpumask_last(sm->tmp_mask);
+
+		/*
+		 * If the core belongs to the housekeeping CPUs, no action
+		 * is taken. This always leaves at least one core preferred,
+		 * so some CPUs remain available to run tasks.
+		 */
+		if (cpumask_equal(cpu_smt_mask(last_cpu), cpu_smt_mask(this_cpu)))
+			return;
+
+		for_each_cpu(tmp_cpu, cpu_smt_mask(last_cpu)) {
+			set_cpu_preferred(tmp_cpu, false);
+			if (tick_nohz_full_cpu(tmp_cpu))
+				tick_nohz_dep_set_cpu(tmp_cpu, TICK_DEP_BIT_SCHED);
+		}
+	}
+
+	/* If the steal time values are low, add one more core to the preferred CPUs */
+	if (steal_ratio < sm->low_threshold) {
+		int first_cpu;
+
+		first_cpu = cpumask_first_andnot(cpu_online_mask, cpu_preferred_mask);
+		/* All CPUs are preferred. Nothing to increase further */
+		if (first_cpu >= nr_cpu_ids)
+			return;
+
+		for_each_cpu(tmp_cpu, cpu_smt_mask(first_cpu))
+			set_cpu_preferred(tmp_cpu, true);
+	}
+#endif
 }
 
 void sched_trigger_steal_computation(int cpu)
-- 
2.47.3


Thread overview: 29+ messages
2026-04-07 19:19 [PATCH v2 00/17] sched/paravirt: Introduce cpu_preferred_mask and steal-driven vCPU backoff Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 01/17] sched/debug: Remove unused schedstats Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 02/17] sched/docs: Document cpu_preferred_mask and Preferred CPU concept Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 03/17] cpumask: Introduce cpu_preferred_mask Shrikanth Hegde
2026-04-07 20:27   ` Yury Norov
2026-04-08  9:16     ` Shrikanth Hegde
2026-04-08 17:57       ` Yury Norov
2026-04-07 19:19 ` [PATCH v2 04/17] sysfs: Add preferred CPU file Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 05/17] sched/core: allow only preferred CPUs in is_cpu_allowed Shrikanth Hegde
2026-04-08  1:05   ` Yury Norov
2026-04-08 12:56     ` Shrikanth Hegde
2026-04-08 18:09       ` Yury Norov
2026-04-07 19:19 ` [PATCH v2 06/17] sched/fair: Select preferred CPU at wakeup when possible Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 07/17] sched/fair: load balance only among preferred CPUs Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 08/17] sched/rt: Select a preferred CPU for wakeup and pulling rt task Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 09/17] sched/core: Keep tick on non-preferred CPUs until tasks are out Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 10/17] sched/core: Push current task from non preferred CPU Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 11/17] sched/debug: Add migration stats due to non preferred CPUs Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 12/17] sched/feature: Add STEAL_MONITOR feature Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 13/17] sched/core: Introduce a simple steal monitor Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 14/17] sched/core: Compute steal values at regular intervals Shrikanth Hegde
2026-04-07 19:19 ` Shrikanth Hegde [this message]
2026-04-07 19:19 ` [PATCH v2 16/17] sched/core: Mark the direction of steal values to avoid oscillations Shrikanth Hegde
2026-04-07 19:19 ` [PATCH v2 17/17] sched/debug: Add debug knobs for steal monitor Shrikanth Hegde
2026-04-07 19:50 ` [PATCH v2 00/17] sched/paravirt: Introduce cpu_preferred_mask and steal-driven vCPU backoff Shrikanth Hegde
2026-04-08 10:14 ` Hillf Danton
2026-04-08 13:49   ` Shrikanth Hegde
2026-04-09  5:15     ` Hillf Danton
2026-04-09 10:27       ` Shrikanth Hegde
