From: Tejun Heo <tj@kernel.org>
To: torvalds@linux-foundation.org, mingo@redhat.com,
peterz@infradead.org, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
bristot@redhat.com, vschneid@redhat.com, ast@kernel.org,
daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
joshdon@google.com, brho@google.com, pjt@google.com,
derkling@google.com, haoluo@google.com, dvernet@meta.com,
dschatzberg@meta.com, dskarlat@cs.cmu.edu, riel@surriel.com,
changwoo@igalia.com, himadrics@inria.fr, memxor@gmail.com,
andrea.righi@canonical.com, joel@joelfernandes.org
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
kernel-team@meta.com, Tejun Heo <tj@kernel.org>
Subject: [PATCH 06/30] sched: Factor out update_other_load_avgs() from __update_blocked_others()
Date: Tue, 18 Jun 2024 11:17:21 -1000
Message-ID: <20240618212056.2833381-7-tj@kernel.org>
In-Reply-To: <20240618212056.2833381-1-tj@kernel.org>
RT, DL, thermal and irq load and utilization metrics need to be decayed and
updated periodically and before consumption to keep the numbers reasonable.
This is currently done from __update_blocked_others() as part of the fair
class load balance path. Let's factor it out into update_other_load_avgs().
Pure refactor. No functional changes.

This will be used by the new BPF extensible scheduling class to ensure that
the above metrics are properly maintained.

v2: Refreshed on top of tip:sched/core.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: David Vernet <dvernet@meta.com>
---
kernel/sched/fair.c | 16 +++-------------
kernel/sched/sched.h | 4 ++++
kernel/sched/syscalls.c | 19 +++++++++++++++++++
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18ecd4f908e4..715d7c1f55df 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9352,28 +9352,18 @@ static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {
static bool __update_blocked_others(struct rq *rq, bool *done)
{
- const struct sched_class *curr_class;
- u64 now = rq_clock_pelt(rq);
- unsigned long hw_pressure;
- bool decayed;
+ bool updated;
/*
* update_load_avg() can call cpufreq_update_util(). Make sure that RT,
* DL and IRQ signals have been updated before updating CFS.
*/
- curr_class = rq->curr->sched_class;
-
- hw_pressure = arch_scale_hw_pressure(cpu_of(rq));
-
- decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
- update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
- update_hw_load_avg(now, rq, hw_pressure) |
- update_irq_load_avg(rq, 0);
+ updated = update_other_load_avgs(rq);
if (others_have_blocked(rq))
*done = false;
- return decayed;
+ return updated;
}
#ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 656a63c0d393..a5a4f59151db 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3074,6 +3074,8 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) { }
#ifdef CONFIG_SMP
+bool update_other_load_avgs(struct rq *rq);
+
unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
unsigned long *min,
unsigned long *max);
@@ -3117,6 +3119,8 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
return READ_ONCE(rq->avg_rt.util_avg);
}
+#else /* !CONFIG_SMP */
+static inline bool update_other_load_avgs(struct rq *rq) { return false; }
#endif /* CONFIG_SMP */
#ifdef CONFIG_UCLAMP_TASK
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index cf189bc3dd18..050215ef8fa4 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -259,6 +259,25 @@ int sched_core_idle_cpu(int cpu)
#endif
#ifdef CONFIG_SMP
+/*
+ * Load avg and utilization metrics need to be updated periodically and before
+ * consumption. This function updates the metrics for all subsystems except for
+ * the fair class. @rq must be locked and have its clock updated.
+ */
+bool update_other_load_avgs(struct rq *rq)
+{
+ u64 now = rq_clock_pelt(rq);
+ const struct sched_class *curr_class = rq->curr->sched_class;
+ unsigned long hw_pressure = arch_scale_hw_pressure(cpu_of(rq));
+
+ lockdep_assert_rq_held(rq);
+
+ return update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
+ update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
+ update_hw_load_avg(now, rq, hw_pressure) |
+ update_irq_load_avg(rq, 0);
+}
+
/*
* This function computes an effective utilization for the given CPU, to be
* used for frequency selection given the linear relation: f = u * f_max.
--
2.45.2