public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/6] sched: Misc cleanups
@ 2025-12-01  6:46 Ingo Molnar
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
                   ` (5 more replies)
  0 siblings, 6 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

Misc details I noticed while working on the fair scheduling code.

Arguably sum_w_vruntime is a bit of a tongue-twister - any better ideas?

The latest version of this series can be found at:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/tip.git WIP.sched/misc

The commits are based on tip:sched/core.

Thanks,

	Ingo

================
Ingo Molnar (6):
  sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks
  sched/fair: Clean up comments in 'struct cfs_rq'
  sched/fair: Separate se->vlag from se->vprot
  sched/fair: Rename avg_vruntime() to cfs_avg_vruntime()
  sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions

 include/linux/sched.h |  4 ++--
 kernel/sched/debug.c  |  4 ++--
 kernel/sched/fair.c   | 48 ++++++++++++++++++++++++------------------------
 kernel/sched/sched.h  | 28 +++++++++++++---------------
 4 files changed, 41 insertions(+), 43 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-04  5:53   ` Shrikanth Hegde
                     ` (2 more replies)
  2025-12-01  6:46 ` [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq' Ingo Molnar
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

Join two identical #ifdef blocks:

  #ifdef CONFIG_CFS_BANDWIDTH
  ...
  #endif

  #ifdef CONFIG_CFS_BANDWIDTH
  ...
  #endif

Also mark nested #ifdef blocks in the usual fashion, to make
it apparent at a glance where we are in a nested hierarchy
of #ifdefs.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/sched.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b419a4d98461..a29965c93832 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -726,9 +726,7 @@ struct cfs_rq {
 	unsigned long		h_load;
 	u64			last_h_load_update;
 	struct sched_entity	*h_load_next;
-#endif /* CONFIG_FAIR_GROUP_SCHED */
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct rq		*rq;	/* CPU runqueue to which this cfs_rq is attached */
 
 	/*
@@ -746,14 +744,14 @@ struct cfs_rq {
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;
 
-#ifdef CONFIG_CFS_BANDWIDTH
+# ifdef CONFIG_CFS_BANDWIDTH
 	int			runtime_enabled;
 	s64			runtime_remaining;
 
 	u64			throttled_pelt_idle;
-#ifndef CONFIG_64BIT
+#  ifndef CONFIG_64BIT
 	u64                     throttled_pelt_idle_copy;
-#endif
+#  endif
 	u64			throttled_clock;
 	u64			throttled_clock_pelt;
 	u64			throttled_clock_pelt_time;
@@ -765,7 +763,7 @@ struct cfs_rq {
 	struct list_head	throttled_list;
 	struct list_head	throttled_csd_list;
 	struct list_head        throttled_limbo_list;
-#endif /* CONFIG_CFS_BANDWIDTH */
+# endif /* CONFIG_CFS_BANDWIDTH */
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 };
 
-- 
2.51.0



* [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq'
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

 - Fix vertical alignment
 - Fix typos
 - Fix capitalization

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/sched.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a29965c93832..6bfcf52a4840 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -670,13 +670,13 @@ struct balance_callback {
 	void (*func)(struct rq *rq);
 };
 
-/* CFS-related fields in a runqueue */
+/* Fair scheduling SCHED_{NORMAL,BATCH,IDLE} related fields in a runqueue: */
 struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_queued;
-	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_runnable;     /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_idle; /* SCHED_IDLE */
+	unsigned int		h_nr_queued;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
 	u64			avg_load;
@@ -690,7 +690,7 @@ struct cfs_rq {
 	struct rb_root_cached	tasks_timeline;
 
 	/*
-	 * 'curr' points to currently running entity on this cfs_rq.
+	 * 'curr' points to the currently running entity on this cfs_rq.
 	 * It is set to NULL otherwise (i.e when none are currently running).
 	 */
 	struct sched_entity	*curr;
@@ -739,7 +739,7 @@ struct cfs_rq {
 	 */
 	int			on_list;
 	struct list_head	leaf_cfs_rq_list;
-	struct task_group	*tg;	/* group that "owns" this runqueue */
+	struct task_group	*tg;	/* Group that "owns" this runqueue */
 
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;
-- 
2.51.0



* [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
  2025-12-01  6:46 ` [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq' Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-01  8:06   ` [PATCH 3/6 v2] " Ingo Molnar
                     ` (2 more replies)
  2025-12-01  6:46 ` [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime() Ingo Molnar
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

There are no real space concerns here, and keeping these fields
in a union makes reading (and tracing) the scheduler code harder.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e84bc5bce816..667fa08aee75 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -586,7 +586,7 @@ struct sched_entity {
 	u64				sum_exec_runtime;
 	u64				prev_sum_exec_runtime;
 	u64				vruntime;
-	union {
+//	union {
 		/*
 		 * When !@on_rq this field is vlag.
 		 * When cfs_rq->curr == se (which implies @on_rq)
@@ -594,7 +594,7 @@ struct sched_entity {
 		 */
 		s64                     vlag;
 		u64                     vprot;
-	};
+//	};
 	u64				slice;
 
 	u64				nr_migrations;
-- 
2.51.0



* [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime()
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
                   ` (2 preceding siblings ...)
  2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-02 10:24   ` Peter Zijlstra
  2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
  2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
  5 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

Since the ->avg_vruntime field doesn't actually hold the
same quantity as the avg_vruntime() result, reduce the
confusion by renaming the latter to follow the common
cfs_*() nomenclature of globally visible functions of the
fair scheduler.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 10 +++++-----
 kernel/sched/sched.h |  2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 41caa22e0680..a6ceda12bd35 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -829,7 +829,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "zero_vruntime",
 			SPLIT_NS(zero_vruntime));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "avg_vruntime",
-			SPLIT_NS(avg_vruntime(cfs_rq)));
+			SPLIT_NS(cfs_avg_vruntime(cfs_rq)));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "right_vruntime",
 			SPLIT_NS(right_vruntime));
 	spread = right_vruntime - left_vruntime;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 769d7b7990df..3d6d551168aa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -651,7 +651,7 @@ void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
  * Specifically: avg_runtime() + 0 must result in entity_eligible() := true
  * For this to be so, the result of this function must have a left bias.
  */
-u64 avg_vruntime(struct cfs_rq *cfs_rq)
+u64 cfs_avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
@@ -696,7 +696,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 	WARN_ON_ONCE(!se->on_rq);
 
-	vlag = avg_vruntime(cfs_rq) - se->vruntime;
+	vlag = cfs_avg_vruntime(cfs_rq) - se->vruntime;
 	limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);
 
 	se->vlag = clamp(vlag, -limit, limit);
@@ -716,7 +716,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
  *
  * lag_i >= 0 -> \Sum (v_i - v)*w_i >= (v_i - v)*(\Sum w_i)
  *
- * Note: using 'avg_vruntime() > se->vruntime' is inaccurate due
+ * Note: using 'cfs_avg_vruntime() > se->vruntime' is inaccurate due
  *       to the loss in precision caused by the division.
  */
 static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
@@ -742,7 +742,7 @@ int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 static void update_zero_vruntime(struct cfs_rq *cfs_rq)
 {
-	u64 vruntime = avg_vruntime(cfs_rq);
+	u64 vruntime = cfs_avg_vruntime(cfs_rq);
 	s64 delta = (s64)(vruntime - cfs_rq->zero_vruntime);
 
 	avg_vruntime_update(cfs_rq, delta);
@@ -5099,7 +5099,7 @@ void __setparam_fair(struct task_struct *p, const struct sched_attr *attr)
 static void
 place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
-	u64 vslice, vruntime = avg_vruntime(cfs_rq);
+	u64 vslice, vruntime = cfs_avg_vruntime(cfs_rq);
 	s64 lag = 0;
 
 	if (!se->custom_slice)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6bfcf52a4840..47f7b6df634c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3956,7 +3956,7 @@ static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
 static inline void init_sched_mm_cid(struct task_struct *t) { }
 #endif /* !CONFIG_SCHED_MM_CID */
 
-extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
+extern u64 cfs_avg_vruntime(struct cfs_rq *cfs_rq);
 extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);
 static inline
 void move_queued_task_locked(struct rq *src_rq, struct rq *dst_rq, struct task_struct *task)
-- 
2.51.0



* [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
                   ` (3 preceding siblings ...)
  2025-12-01  6:46 ` [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime() Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-02 10:27   ` Peter Zijlstra
                     ` (2 more replies)
  2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
  5 siblings, 3 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

The ::avg_load field is a long-standing misnomer: it says it's an
'average load', but in reality it's the momentary sum of the load
of all currently runnable tasks. We'd have to also perform a
division by nr_running (or use time-decay) to arrive at any sort
of average value.

This is clear from comments about the math of fair scheduling:

    *              \Sum w_i := cfs_rq->avg_load

The sum of all weights is ... the sum of all weights, not
the average of all weights.

To make it doubly confusing, there's also an ::avg_load
in the load-balancing struct sg_lb_stats, which *is* a
true average.

The second part of the field's name is a minor misnomer
as well: it says 'load', and it is indeed a load_weight
structure as it shares code with the load-balancer - but
it's only in an SMP load-balancing context that
load = weight; in the fair scheduling context the primary
purpose is the weighting of different nice levels.

So rename the field to ::sum_weight instead, which makes
the terminology of the EEVDF math match up with our
implementation of it:

    *              \Sum w_i := cfs_rq->sum_weight

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c  | 16 ++++++++--------
 kernel/sched/sched.h |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3d6d551168aa..2ffd52a2e7a0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -608,7 +608,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  *
  *                    v0 := cfs_rq->zero_vruntime
  * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
- *              \Sum w_i := cfs_rq->avg_load
+ *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
  * deltas: (v_i - v), will be in the order of the maximal (virtual) lag
@@ -625,7 +625,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime += key * weight;
-	cfs_rq->avg_load += weight;
+	cfs_rq->sum_weight += weight;
 }
 
 static void
@@ -635,16 +635,16 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime -= key * weight;
-	cfs_rq->avg_load -= weight;
+	cfs_rq->sum_weight -= weight;
 }
 
 static inline
 void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*avg_load
+	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
+	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -655,7 +655,7 @@ u64 cfs_avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -723,7 +723,7 @@ static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -5172,7 +5172,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		 *
 		 *   vl_i = (W + w_i)*vl'_i / W
 		 */
-		load = cfs_rq->avg_load;
+		load = cfs_rq->sum_weight;
 		if (curr && curr->on_rq)
 			load += scale_load_down(curr->load.weight);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 47f7b6df634c..54994d93958a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -679,7 +679,7 @@ struct cfs_rq {
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
-	u64			avg_load;
+	u64			sum_weight;
 
 	u64			zero_vruntime;
 #ifdef CONFIG_SCHED_CORE
-- 
2.51.0



* [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions
  2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
                   ` (4 preceding siblings ...)
  2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
@ 2025-12-01  6:46 ` Ingo Molnar
  2025-12-02 10:35   ` Peter Zijlstra
                     ` (2 more replies)
  5 siblings, 3 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  6:46 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner, Ingo Molnar

The ::avg_vruntime field is a misnomer: it says it's an
'average vruntime', but in reality it's the momentary sum
of the weighted vruntimes of all queued tasks, which is
at least a division away from being an average.

This is clear from comments about the math of fair scheduling:

    * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime

This confusion is increased by the cfs_avg_vruntime() function,
which does perform the division and returns a true average.

The sum of all weighted vruntimes should be named thusly,
so rename the field to ::sum_w_vruntime. (As arguably
::sum_weighted_vruntime would be a bit of a mouthful.)

Understanding the scheduler is hard enough already, without
extra layers of obfuscated naming. ;-)

Also rename the related helper functions:

  avg_vruntime_add()    => sum_w_vruntime_add()
  avg_vruntime_sub()    => sum_w_vruntime_sub()
  avg_vruntime_update() => sum_w_vruntime_update()

With the notable exception of cfs_avg_vruntime(), which
was named accurately.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/debug.c |  2 +-
 kernel/sched/fair.c  | 26 +++++++++++++-------------
 kernel/sched/sched.h |  2 +-
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index a6ceda12bd35..b6fa5ca6a932 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -828,7 +828,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 			SPLIT_NS(left_vruntime));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "zero_vruntime",
 			SPLIT_NS(zero_vruntime));
-	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "avg_vruntime",
+	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "sum_w_vruntime",
 			SPLIT_NS(cfs_avg_vruntime(cfs_rq)));
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", "right_vruntime",
 			SPLIT_NS(right_vruntime));
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ffd52a2e7a0..41ede30b74cd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -607,7 +607,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * Which we track using:
  *
  *                    v0 := cfs_rq->zero_vruntime
- * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
+ * \Sum (v_i - v0) * w_i := cfs_rq->sum_w_vruntime
  *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
@@ -619,32 +619,32 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * As measured, the max (key * weight) value was ~44 bits for a kernel build.
  */
 static void
-avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->sum_w_vruntime += key * weight;
 	cfs_rq->sum_weight += weight;
 }
 
 static void
-avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->sum_w_vruntime -= key * weight;
 	cfs_rq->sum_weight -= weight;
 }
 
 static inline
-void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
+void sum_w_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
+	 * v' = v + d ==> sum_w_vruntime' = sum_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
+	cfs_rq->sum_w_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -654,7 +654,7 @@ void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 u64 cfs_avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -722,7 +722,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -745,7 +745,7 @@ static void update_zero_vruntime(struct cfs_rq *cfs_rq)
 	u64 vruntime = cfs_avg_vruntime(cfs_rq);
 	s64 delta = (s64)(vruntime - cfs_rq->zero_vruntime);
 
-	avg_vruntime_update(cfs_rq, delta);
+	sum_w_vruntime_update(cfs_rq, delta);
 
 	cfs_rq->zero_vruntime = vruntime;
 }
@@ -819,7 +819,7 @@ RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
  */
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	avg_vruntime_add(cfs_rq, se);
+	sum_w_vruntime_add(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 	se->min_vruntime = se->vruntime;
 	se->min_slice = se->slice;
@@ -831,7 +831,7 @@ static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				  &min_vruntime_cb);
-	avg_vruntime_sub(cfs_rq, se);
+	sum_w_vruntime_sub(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 54994d93958a..f0eb58458ff3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -678,7 +678,7 @@ struct cfs_rq {
 	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
-	s64			avg_vruntime;
+	s64			sum_w_vruntime;
 	u64			sum_weight;
 
 	u64			zero_vruntime;
-- 
2.51.0



* [PATCH 3/6 v2] sched/fair: Separate se->vlag from se->vprot
  2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
@ 2025-12-01  8:06   ` Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-01  8:06 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner


* Ingo Molnar <mingo@kernel.org> wrote:

> There's no real space concerns here and keeping these fields
> in a union makes reading (and tracing) the scheduler code harder.
> 
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>  include/linux/sched.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index e84bc5bce816..667fa08aee75 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -586,7 +586,7 @@ struct sched_entity {
>  	u64				sum_exec_runtime;
>  	u64				prev_sum_exec_runtime;
>  	u64				vruntime;
> -	union {
> +//	union {
>  		/*
>  		 * When !@on_rq this field is vlag.
>  		 * When cfs_rq->curr == se (which implies @on_rq)
> @@ -594,7 +594,7 @@ struct sched_entity {
>  		 */
>  		s64                     vlag;
>  		u64                     vprot;
> -	};
> +//	};
>  	u64				slice;
>  
>  	u64				nr_migrations;

Of course I meant the patch below. :-/

Thanks,

	Ingo

===================================>
From: Ingo Molnar <mingo@kernel.org>
Date: Wed, 26 Nov 2025 05:31:28 +0100
Subject: [PATCH] sched/fair: Separate se->vlag from se->vprot

There are no real space concerns here, and keeping these fields
in a union makes reading (and tracing) the scheduler code harder.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e84bc5bce816..9aa38dc37b09 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -586,15 +586,10 @@ struct sched_entity {
 	u64				sum_exec_runtime;
 	u64				prev_sum_exec_runtime;
 	u64				vruntime;
-	union {
-		/*
-		 * When !@on_rq this field is vlag.
-		 * When cfs_rq->curr == se (which implies @on_rq)
-		 * this field is vprot. See protect_slice().
-		 */
-		s64                     vlag;
-		u64                     vprot;
-	};
+	/* Approximated virtual lag: */
+	s64				vlag;
+	/* 'Protected' deadline, to give out minimum quantums: */
+	u64				vprot;
 	u64				slice;
 
 	u64				nr_migrations;



* Re: [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime()
  2025-12-01  6:46 ` [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime() Ingo Molnar
@ 2025-12-02 10:24   ` Peter Zijlstra
  2025-12-02 15:15     ` Ingo Molnar
  0 siblings, 1 reply; 26+ messages in thread
From: Peter Zijlstra @ 2025-12-02 10:24 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner

On Mon, Dec 01, 2025 at 07:46:45AM +0100, Ingo Molnar wrote:
> Since the unit of the ->avg_vruntime field isn't actually
> the same thing as the avg_vruntime() result, reduce confusion
> and rename the latter to the common cfs_*() nomenclature of
> visible global functions of the fair scheduler.

But you're going to rename both those fields into sum_weight and
sum_w_vruntime freeing up the avg_vruntime name and clearing up the
above confusion.

So why then still rename the thing? The result of the function really is
the (weighted) average of the vruntime, so the naming isn't confusing
or bad (unlike the variables it uses, which are pretty badly named).


* Re: [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
@ 2025-12-02 10:27   ` Peter Zijlstra
  2025-12-02 15:57     ` Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 1 reply; 26+ messages in thread
From: Peter Zijlstra @ 2025-12-02 10:27 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner

On Mon, Dec 01, 2025 at 07:46:46AM +0100, Ingo Molnar wrote:
> The ::avg_load field is a long-standing misnomer: it says it's an
> 'average load', but in reality it's the momentary sum of the load
> of all currently runnable tasks. We'd have to also perform a
> division by nr_running (or use time-decay) to arrive at any sort
> of average value.
> 
> This is clear from comments about the math of fair scheduling:
> 
>     *              \Sum w_i := cfs_rq->avg_load
> 
> The sum of all weights is ... the sum of all weights, not
> the average of all weights.
> 
> To make it doubly confusing, there's also an ::avg_load
> in the load-balancing struct sg_lb_stats, which *is* a
> true average.
> 
> The second part of the field's name is a minor misnomer
> as well: it says 'load', and it is indeed a load_weight
> structure as it shares code with the load-balancer - but
> it's only in an SMP load-balancing context where
> load = weight, in the fair scheduling context the primary
> purpose is the weighting of different nice levels.
> 
> So rename the field to ::sum_weight instead, which makes
> the terminology of the EEVDF math match up with our
> implementation of it:
> 
>     *              \Sum w_i := cfs_rq->sum_weight
> 
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

Bah, this is going to be a pain rebasing for me, but yes, these
variables are poorly named. 'sum_weight' is a better name.

> ---
>  kernel/sched/fair.c  | 16 ++++++++--------
>  kernel/sched/sched.h |  2 +-
>  2 files changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3d6d551168aa..2ffd52a2e7a0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -608,7 +608,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
>   *
>   *                    v0 := cfs_rq->zero_vruntime
>   * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
> - *              \Sum w_i := cfs_rq->avg_load
> + *              \Sum w_i := cfs_rq->sum_weight
>   *
>   * Since zero_vruntime closely tracks the per-task service, these
>   * deltas: (v_i - v), will be in the order of the maximal (virtual) lag
> @@ -625,7 +625,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  	s64 key = entity_key(cfs_rq, se);
>  
>  	cfs_rq->avg_vruntime += key * weight;
> -	cfs_rq->avg_load += weight;
> +	cfs_rq->sum_weight += weight;
>  }
>  
>  static void
> @@ -635,16 +635,16 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  	s64 key = entity_key(cfs_rq, se);
>  
>  	cfs_rq->avg_vruntime -= key * weight;
> -	cfs_rq->avg_load -= weight;
> +	cfs_rq->sum_weight -= weight;
>  }
>  
>  static inline
>  void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
>  {
>  	/*
> -	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*avg_load
> +	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
>  	 */
> -	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
> +	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
>  }
>  
>  /*
> @@ -655,7 +655,7 @@ u64 cfs_avg_vruntime(struct cfs_rq *cfs_rq)
>  {
>  	struct sched_entity *curr = cfs_rq->curr;
>  	s64 avg = cfs_rq->avg_vruntime;
> -	long load = cfs_rq->avg_load;
> +	long load = cfs_rq->sum_weight;
>  
>  	if (curr && curr->on_rq) {
>  		unsigned long weight = scale_load_down(curr->load.weight);
> @@ -723,7 +723,7 @@ static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
>  {
>  	struct sched_entity *curr = cfs_rq->curr;
>  	s64 avg = cfs_rq->avg_vruntime;
> -	long load = cfs_rq->avg_load;
> +	long load = cfs_rq->sum_weight;
>  
>  	if (curr && curr->on_rq) {
>  		unsigned long weight = scale_load_down(curr->load.weight);
> @@ -5172,7 +5172,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  		 *
>  		 *   vl_i = (W + w_i)*vl'_i / W
>  		 */
> -		load = cfs_rq->avg_load;
> +		load = cfs_rq->sum_weight;
>  		if (curr && curr->on_rq)
>  			load += scale_load_down(curr->load.weight);
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 47f7b6df634c..54994d93958a 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -679,7 +679,7 @@ struct cfs_rq {
>  	unsigned int		h_nr_idle;		/* SCHED_IDLE */
>  
>  	s64			avg_vruntime;
> -	u64			avg_load;
> +	u64			sum_weight;
>  
>  	u64			zero_vruntime;
>  #ifdef CONFIG_SCHED_CORE
> -- 
> 2.51.0
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions
  2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
@ 2025-12-02 10:35   ` Peter Zijlstra
  2025-12-02 15:19     ` Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 1 reply; 26+ messages in thread
From: Peter Zijlstra @ 2025-12-02 10:35 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner

On Mon, Dec 01, 2025 at 07:46:47AM +0100, Ingo Molnar wrote:
> The ::avg_vruntime field is a misnomer: it says it's an
> 'average vruntime', but in reality it's the momentary sum
> of the weighted vruntimes of all queued tasks, which is
> at least a division away from being an average.
> 
> This is clear from comments about the math of fair scheduling:
> 
>     * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
> 
> This confusion is increased by the cfs_avg_vruntime() function,
> which does perform the division and returns a true average.
> 
> The sum of all weighted vruntimes should be named thusly,
> so rename the field to ::sum_w_vruntime. (As arguably
> ::sum_weighted_vruntime would be a bit of a mouthful.)
> 
> Understanding the scheduler is hard enough already, without
> extra layers of obfuscated naming. ;-)
> 
> Also rename related helper functions:
> 
>   avg_vruntime_add()    => sum_w_vruntime_add()
>   avg_vruntime_sub()    => sum_w_vruntime_sub()
>   avg_vruntime_update() => sum_w_vruntime_update()

So vruntime := runtime / w, so w*vruntime is runtime again. I'm sure
there's something there.

/me runs.

But yeah no arguments this naming needs help.

> With the notable exception of cfs_avg_vruntime(), which
> was named accurately.

But the old avg_vruntime() name was good too!


* Re: [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime()
  2025-12-02 10:24   ` Peter Zijlstra
@ 2025-12-02 15:15     ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-02 15:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Dec 01, 2025 at 07:46:45AM +0100, Ingo Molnar wrote:
> > Since the unit of the ->avg_vruntime field isn't actually
> > the same thing as the avg_vruntime() result, reduce confusion
> > and rename the latter to the common cfs_*() nomenclature of
> > visible global functions of the fair scheduler.
> 
> But you're going to rename both those fields into sum_weight and
> sum_w_vruntime freeing up the avg_vruntime name and clearing up the
> above confusion.

Ah, indeed and agreed - I dropped this patch.

Thanks,

	Ingo


* Re: [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions
  2025-12-02 10:35   ` Peter Zijlstra
@ 2025-12-02 15:19     ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-02 15:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Dec 01, 2025 at 07:46:47AM +0100, Ingo Molnar wrote:
> > The ::avg_vruntime field is a  misnomer: it says it's an
> > 'average vruntime', but in reality it's the momentary sum
> > 'average vruntime', but in reality it's the momentary sum
> > of the weighted vruntimes of all queued tasks, which is
> > at least a division away from being an average.
> > 
> > This is clear from comments about the math of fair scheduling:
> > 
> >     * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
> > 
> > This confusion is increased by the cfs_avg_vruntime() function,
> > which does perform the division and returns a true average.
> > 
> > The sum of all weighted vruntimes should be named thusly,
> > so rename the field to ::sum_w_vruntime. (As arguably
> > ::sum_weighted_vruntime would be a bit of a mouthful.)
> > 
> > Understanding the scheduler is hard enough already, without
> > extra layers of obfuscated naming. ;-)
> > 
> > Also rename related helper functions:
> > 
> >   avg_vruntime_add()    => sum_w_vruntime_add()
> >   avg_vruntime_sub()    => sum_w_vruntime_sub()
> >   avg_vruntime_update() => sum_w_vruntime_update()
> 
> So vruntime := runtime / w, so w*vruntime is runtime again. I'm sure
> there's something there.

Haha, yes. It's delta_exec all the way down, and turtles.

> /me runs.

> But yeah no arguments this naming needs help.
> 
> > With the notable exception of cfs_avg_vruntime(), which
> > was named accurately.
> 
> But the old avg_vruntime() name was good too!

Yeah. It's now restored in its old glory. :-)

Thanks,

	Ingo


* Re: [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  2025-12-02 10:27   ` Peter Zijlstra
@ 2025-12-02 15:57     ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-02 15:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Shrikanth Hegde, Linus Torvalds, Mel Gorman,
	Steven Rostedt, Thomas Gleixner


* Peter Zijlstra <peterz@infradead.org> wrote:

> On Mon, Dec 01, 2025 at 07:46:46AM +0100, Ingo Molnar wrote:
> > The ::avg_load field is a long-standing misnomer: it says it's an
> > 'average load', but in reality it's the momentary sum of the load
> > of all currently runnable tasks. We'd have to also perform a
> > division by nr_running (or use time-decay) to arrive at any sort
> > of average value.
> > 
> > This is clear from comments about the math of fair scheduling:
> > 
> >     *              \Sum w_i := cfs_rq->avg_load
> > 
> > The sum of all weights is ... the sum of all weights, not
> > the average of all weights.
> > 
> > To make it doubly confusing, there's also an ::avg_load
> > in the load-balancing struct sg_lb_stats, which *is* a
> > true average.
> > 
> > The second part of the field's name is a minor misnomer
> > as well: it says 'load', and it is indeed a load_weight
> > structure as it shares code with the load-balancer - but
> > it's only in an SMP load-balancing context where
> > load = weight, in the fair scheduling context the primary
> > purpose is the weighting of different nice levels.
> > 
> > So rename the field to ::sum_weight instead, which makes
> > the terminology of the EEVDF math match up with our
> > implementation of it:
> > 
> >     *              \Sum w_i := cfs_rq->sum_weight
> > 
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> Bah, this is going to be a pain rebasing for me, but yes, these
> variables are poorly named. 'sum_weight' is a better name.

Fair enough, and to make this easier for you I've 
rebased your worst affected tree (queue.git:sched/flat) 
on top of the mingo/tip:WIP.sched/core-for-v6.20 tree, 
which includes these renames (with all your feedback 
addressed AFAICT), see:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/tip.git WIP.sched/flat

... and it builds and boots. :-)

Thanks,

	Ingo


* Re: [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
@ 2025-12-04  5:53   ` Shrikanth Hegde
  2025-12-06 10:48     ` Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 1 reply; 26+ messages in thread
From: Shrikanth Hegde @ 2025-12-04  5:53 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Linus Torvalds, Mel Gorman, Steven Rostedt,
	Thomas Gleixner, linux-kernel



On 12/1/25 12:16 PM, Ingo Molnar wrote:
> Join two identical #ifdef blocks:
> 
>    #ifdef CONFIG_CFS_BANDWIDTH
>    ...
>    #endif
> 
>    #ifdef CONFIG_CFS_BANDWIDTH
>    ...
>    #endif
> 

nit: I think it is CONFIG_FAIR_GROUP_SCHED here and in the subject line.

> Also mark nested #ifdef blocks in the usual fashion, to make
> it more apparent where in a nested hierarchy of #ifdefs we
> are at a glance.
> 
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>   kernel/sched/sched.h | 10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index b419a4d98461..a29965c93832 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -726,9 +726,7 @@ struct cfs_rq {
>   	unsigned long		h_load;
>   	u64			last_h_load_update;
>   	struct sched_entity	*h_load_next;
> -#endif /* CONFIG_FAIR_GROUP_SCHED */
>   
> -#ifdef CONFIG_FAIR_GROUP_SCHED
>   	struct rq		*rq;	/* CPU runqueue to which this cfs_rq is attached */
>   
>   	/*
> @@ -746,14 +744,14 @@ struct cfs_rq {
>   	/* Locally cached copy of our task_group's idle value */
>   	int			idle;
>   
> -#ifdef CONFIG_CFS_BANDWIDTH
> +# ifdef CONFIG_CFS_BANDWIDTH
>   	int			runtime_enabled;
>   	s64			runtime_remaining;
>   
>   	u64			throttled_pelt_idle;
> -#ifndef CONFIG_64BIT
> +#  ifndef CONFIG_64BIT
>   	u64                     throttled_pelt_idle_copy;
> -#endif
> +#  endif
>   	u64			throttled_clock;
>   	u64			throttled_clock_pelt;
>   	u64			throttled_clock_pelt_time;
> @@ -765,7 +763,7 @@ struct cfs_rq {
>   	struct list_head	throttled_list;
>   	struct list_head	throttled_csd_list;
>   	struct list_head        throttled_limbo_list;
> -#endif /* CONFIG_CFS_BANDWIDTH */
> +# endif /* CONFIG_CFS_BANDWIDTH */
>   #endif /* CONFIG_FAIR_GROUP_SCHED */
>   };
>   

Other than above nit,

Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>


* Re: [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks
  2025-12-04  5:53   ` Shrikanth Hegde
@ 2025-12-06 10:48     ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2025-12-06 10:48 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: Peter Zijlstra, Juri Lelli, Dietmar Eggemann, Valentin Schneider,
	Vincent Guittot, Linus Torvalds, Mel Gorman, Steven Rostedt,
	Thomas Gleixner, linux-kernel

* Shrikanth Hegde <sshegde@linux.ibm.com> wrote:

> On 12/1/25 12:16 PM, Ingo Molnar wrote:
> > Join two identical #ifdef blocks:
> >
> >    #ifdef CONFIG_CFS_BANDWIDTH
> >    ...
> >    #endif
> >
> >    #ifdef CONFIG_CFS_BANDWIDTH
> >    ...
> >    #endif
> >
>
> nit: I think it is CONFIG_FAIR_GROUP_SCHED here and in the subject line.

Indeed, fixed.

Thanks,

	Ingo


* [tip: sched/core] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions
  2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
  2025-12-02 10:35   ` Peter Zijlstra
@ 2025-12-14  7:46   ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     8e9aa35061d3fc0b5ab4e9b6247a0dfa600212eb
Gitweb:        https://git.kernel.org/tip/8e9aa35061d3fc0b5ab4e9b6247a0dfa600212eb
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Tue, 02 Dec 2025 16:09:23 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:03 +01:00

sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions

The ::avg_vruntime field is a misnomer: it says it's an
'average vruntime', but in reality it's the momentary sum
of the weighted vruntimes of all queued tasks, which is
at least a division away from being an average.

This is clear from comments about the math of fair scheduling:

    * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime

This confusion is increased by the cfs_avg_vruntime() function,
which does perform the division and returns a true average.

The sum of all weighted vruntimes should be named thusly,
so rename the field to ::sum_w_vruntime. (As arguably
::sum_weighted_vruntime would be a bit of a mouthful.)

Understanding the scheduler is hard enough already, without
extra layers of obfuscated naming. ;-)

Also rename related helper functions:

  avg_vruntime_add()    => sum_w_vruntime_add()
  avg_vruntime_sub()    => sum_w_vruntime_sub()
  avg_vruntime_update() => sum_w_vruntime_update()

With the notable exception of cfs_avg_vruntime(), which
was named accurately.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-7-mingo@kernel.org
---
 kernel/sched/fair.c  | 26 +++++++++++++-------------
 kernel/sched/sched.h |  2 +-
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8e21a95..050ed08 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -607,7 +607,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * Which we track using:
  *
  *                    v0 := cfs_rq->zero_vruntime
- * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
+ * \Sum (v_i - v0) * w_i := cfs_rq->sum_w_vruntime
  *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
@@ -619,32 +619,32 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * As measured, the max (key * weight) value was ~44 bits for a kernel build.
  */
 static void
-avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->sum_w_vruntime += key * weight;
 	cfs_rq->sum_weight += weight;
 }
 
 static void
-avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->sum_w_vruntime -= key * weight;
 	cfs_rq->sum_weight -= weight;
 }
 
 static inline
-void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
+void sum_w_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
+	 * v' = v + d ==> sum_w_vruntime' = sum_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
+	cfs_rq->sum_w_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -654,7 +654,7 @@ void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -722,7 +722,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -745,7 +745,7 @@ static void update_zero_vruntime(struct cfs_rq *cfs_rq)
 	u64 vruntime = avg_vruntime(cfs_rq);
 	s64 delta = (s64)(vruntime - cfs_rq->zero_vruntime);
 
-	avg_vruntime_update(cfs_rq, delta);
+	sum_w_vruntime_update(cfs_rq, delta);
 
 	cfs_rq->zero_vruntime = vruntime;
 }
@@ -819,7 +819,7 @@ RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
  */
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	avg_vruntime_add(cfs_rq, se);
+	sum_w_vruntime_add(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 	se->min_vruntime = se->vruntime;
 	se->min_slice = se->slice;
@@ -831,7 +831,7 @@ static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				  &min_vruntime_cb);
-	avg_vruntime_sub(cfs_rq, se);
+	sum_w_vruntime_sub(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e3e9974..bdb1e74 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -678,7 +678,7 @@ struct cfs_rq {
 	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
-	s64			avg_vruntime;
+	s64			sum_w_vruntime;
 	u64			sum_weight;
 
 	u64			zero_vruntime;


* [tip: sched/core] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
  2025-12-02 10:27   ` Peter Zijlstra
@ 2025-12-14  7:46   ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     969c658869ff1c3998a449d2602c68b1d4b1ce06
Gitweb:        https://git.kernel.org/tip/969c658869ff1c3998a449d2602c68b1d4b1ce06
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 12:09:16 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:03 +01:00

sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight

The ::avg_load field is a long-standing misnomer: it says it's an
'average load', but in reality it's the momentary sum of the load
of all currently runnable tasks. We'd have to also perform a
division by nr_running (or use time-decay) to arrive at any sort
of average value.

This is clear from comments about the math of fair scheduling:

    *              \Sum w_i := cfs_rq->avg_load

The sum of all weights is ... the sum of all weights, not
the average of all weights.

To make it doubly confusing, there's also an ::avg_load
in the load-balancing struct sg_lb_stats, which *is* a
true average.

The second part of the field's name is a minor misnomer
as well: it says 'load', and it is indeed a load_weight
structure as it shares code with the load-balancer - but
it's only in an SMP load-balancing context where
load = weight, in the fair scheduling context the primary
purpose is the weighting of different nice levels.

So rename the field to ::sum_weight instead, which makes
the terminology of the EEVDF math match up with our
implementation of it:

    *              \Sum w_i := cfs_rq->sum_weight

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-6-mingo@kernel.org
---
 kernel/sched/fair.c  | 16 ++++++++--------
 kernel/sched/sched.h |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ea276d8..8e21a95 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -608,7 +608,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  *
  *                    v0 := cfs_rq->zero_vruntime
  * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
- *              \Sum w_i := cfs_rq->avg_load
+ *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
  * deltas: (v_i - v), will be in the order of the maximal (virtual) lag
@@ -625,7 +625,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime += key * weight;
-	cfs_rq->avg_load += weight;
+	cfs_rq->sum_weight += weight;
 }
 
 static void
@@ -635,16 +635,16 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime -= key * weight;
-	cfs_rq->avg_load -= weight;
+	cfs_rq->sum_weight -= weight;
 }
 
 static inline
 void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*avg_load
+	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
+	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -655,7 +655,7 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -723,7 +723,7 @@ static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -5131,7 +5131,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		 *
 		 *   vl_i = (W + w_i)*vl'_i / W
 		 */
-		load = cfs_rq->avg_load;
+		load = cfs_rq->sum_weight;
 		if (curr && curr->on_rq)
 			load += scale_load_down(curr->load.weight);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4ddb755..e3e9974 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -679,7 +679,7 @@ struct cfs_rq {
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
-	u64			avg_load;
+	u64			sum_weight;
 
 	u64			zero_vruntime;
 #ifdef CONFIG_SCHED_CORE


* [tip: sched/core] sched/fair: Separate se->vlag from se->vprot
  2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
  2025-12-01  8:06   ` [PATCH 3/6 v2] " Ingo Molnar
@ 2025-12-14  7:46   ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     6b6d09f274bd6e5bed2751b3cab23e64f19c9e59
Gitweb:        https://git.kernel.org/tip/6b6d09f274bd6e5bed2751b3cab23e64f19c9e59
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 05:31:28 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:02 +01:00

sched/fair: Separate se->vlag from se->vprot

There are no real space concerns here, and keeping these fields
in a union makes reading (and tracing) the scheduler code harder.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-4-mingo@kernel.org
---
 include/linux/sched.h | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d395f28..bf96a7d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -586,15 +586,10 @@ struct sched_entity {
 	u64				sum_exec_runtime;
 	u64				prev_sum_exec_runtime;
 	u64				vruntime;
-	union {
-		/*
-		 * When !@on_rq this field is vlag.
-		 * When cfs_rq->curr == se (which implies @on_rq)
-		 * this field is vprot. See protect_slice().
-		 */
-		s64                     vlag;
-		u64                     vprot;
-	};
+	/* Approximated virtual lag: */
+	s64				vlag;
+	/* 'Protected' deadline, to give out minimum quantums: */
+	u64				vprot;
 	u64				slice;
 
 	u64				nr_migrations;


* [tip: sched/core] sched/fair: Clean up comments in 'struct cfs_rq'
  2025-12-01  6:46 ` [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq' Ingo Molnar
@ 2025-12-14  7:46   ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  1 sibling, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     3dbcec53616803b71e00084a94426a017f0ef04f
Gitweb:        https://git.kernel.org/tip/3dbcec53616803b71e00084a94426a017f0ef04f
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 11:29:18 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:02 +01:00

sched/fair: Clean up comments in 'struct cfs_rq'

 - Fix vertical alignment
 - Fix typos
 - Fix capitalization

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-3-mingo@kernel.org
---
 kernel/sched/sched.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f751ac3..4ddb755 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -670,13 +670,13 @@ struct balance_callback {
 	void (*func)(struct rq *rq);
 };
 
-/* CFS-related fields in a runqueue */
+/* Fair scheduling SCHED_{NORMAL,BATCH,IDLE} related fields in a runqueue: */
 struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_queued;
-	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_runnable;     /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_idle; /* SCHED_IDLE */
+	unsigned int		h_nr_queued;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
 	u64			avg_load;
@@ -690,7 +690,7 @@ struct cfs_rq {
 	struct rb_root_cached	tasks_timeline;
 
 	/*
-	 * 'curr' points to currently running entity on this cfs_rq.
+	 * 'curr' points to the currently running entity on this cfs_rq.
 	 * It is set to NULL otherwise (i.e when none are currently running).
 	 */
 	struct sched_entity	*curr;
@@ -739,7 +739,7 @@ struct cfs_rq {
 	 */
 	int			on_list;
 	struct list_head	leaf_cfs_rq_list;
-	struct task_group	*tg;	/* group that "owns" this runqueue */
+	struct task_group	*tg;	/* Group that "owns" this runqueue */
 
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;


* [tip: sched/core] sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
  2025-12-04  5:53   ` Shrikanth Hegde
@ 2025-12-14  7:46   ` tip-bot2 for Ingo Molnar
  2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-14  7:46 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     7e1f73851205657e90e34a4013d86d129ed514a1
Gitweb:        https://git.kernel.org/tip/7e1f73851205657e90e34a4013d86d129ed514a1
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 11:31:09 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Sun, 14 Dec 2025 08:25:02 +01:00

sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks

Join two identical #ifdef blocks:

  #ifdef CONFIG_FAIR_GROUP_SCHED
  ...
  #endif

  #ifdef CONFIG_FAIR_GROUP_SCHED
  ...
  #endif

Also mark nested #ifdef blocks in the usual fashion, to make
it more apparent where in a nested hierarchy of #ifdefs we
are at a glance.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://patch.msgid.link/20251201064647.1851919-2-mingo@kernel.org
---
 kernel/sched/sched.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 467ea31..f751ac3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -726,9 +726,7 @@ struct cfs_rq {
 	unsigned long		h_load;
 	u64			last_h_load_update;
 	struct sched_entity	*h_load_next;
-#endif /* CONFIG_FAIR_GROUP_SCHED */
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct rq		*rq;	/* CPU runqueue to which this cfs_rq is attached */
 
 	/*
@@ -746,14 +744,14 @@ struct cfs_rq {
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;
 
-#ifdef CONFIG_CFS_BANDWIDTH
+# ifdef CONFIG_CFS_BANDWIDTH
 	int			runtime_enabled;
 	s64			runtime_remaining;
 
 	u64			throttled_pelt_idle;
-#ifndef CONFIG_64BIT
+#  ifndef CONFIG_64BIT
 	u64                     throttled_pelt_idle_copy;
-#endif
+#  endif
 	u64			throttled_clock;
 	u64			throttled_clock_pelt;
 	u64			throttled_clock_pelt_time;
@@ -765,7 +763,7 @@ struct cfs_rq {
 	struct list_head	throttled_list;
 	struct list_head	throttled_csd_list;
 	struct list_head        throttled_limbo_list;
-#endif /* CONFIG_CFS_BANDWIDTH */
+# endif /* CONFIG_CFS_BANDWIDTH */
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 };
 


* [tip: sched/core] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions
  2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
  2025-12-02 10:35   ` Peter Zijlstra
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     dcbc9d3f0e594223275a18f7016001889ad35eff
Gitweb:        https://git.kernel.org/tip/dcbc9d3f0e594223275a18f7016001889ad35eff
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Tue, 02 Dec 2025 16:09:23 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:44 +01:00

sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions

The ::avg_vruntime field is a misnomer: it says it's an
'average vruntime', but in reality it's the momentary sum
of the weighted vruntimes of all queued tasks, which is
at least a division away from being an average.

This is clear from comments about the math of fair scheduling:

    * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime

This confusion is increased by the cfs_avg_vruntime() function,
which does perform the division and returns a true average.

The sum of all weighted vruntimes should be named thusly,
so rename the field to ::sum_w_vruntime. (As arguably
::sum_weighted_vruntime would be a bit of a mouthful.)

Understanding the scheduler is hard enough already, without
extra layers of obfuscated naming. ;-)

Also rename related helper functions:

  sum_vruntime_add()    => sum_w_vruntime_add()
  sum_vruntime_sub()    => sum_w_vruntime_sub()
  sum_vruntime_update() => sum_w_vruntime_update()

With the notable exception of cfs_avg_vruntime(), which
was named accurately.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-7-mingo@kernel.org
---
 kernel/sched/fair.c  | 26 +++++++++++++-------------
 kernel/sched/sched.h |  2 +-
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 65b1065..dcbd995 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -607,7 +607,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * Which we track using:
  *
  *                    v0 := cfs_rq->zero_vruntime
- * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
+ * \Sum (v_i - v0) * w_i := cfs_rq->sum_w_vruntime
  *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
@@ -619,32 +619,32 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  * As measured, the max (key * weight) value was ~44 bits for a kernel build.
  */
 static void
-avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime += key * weight;
+	cfs_rq->sum_w_vruntime += key * weight;
 	cfs_rq->sum_weight += weight;
 }
 
 static void
-avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
+sum_w_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned long weight = scale_load_down(se->load.weight);
 	s64 key = entity_key(cfs_rq, se);
 
-	cfs_rq->avg_vruntime -= key * weight;
+	cfs_rq->sum_w_vruntime -= key * weight;
 	cfs_rq->sum_weight -= weight;
 }
 
 static inline
-void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
+void sum_w_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
+	 * v' = v + d ==> sum_w_vruntime' = sum_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
+	cfs_rq->sum_w_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -654,7 +654,7 @@ void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -722,7 +722,7 @@ static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
-	s64 avg = cfs_rq->avg_vruntime;
+	s64 avg = cfs_rq->sum_w_vruntime;
 	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
@@ -745,7 +745,7 @@ static void update_zero_vruntime(struct cfs_rq *cfs_rq)
 	u64 vruntime = avg_vruntime(cfs_rq);
 	s64 delta = (s64)(vruntime - cfs_rq->zero_vruntime);
 
-	avg_vruntime_update(cfs_rq, delta);
+	sum_w_vruntime_update(cfs_rq, delta);
 
 	cfs_rq->zero_vruntime = vruntime;
 }
@@ -819,7 +819,7 @@ RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
  */
 static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	avg_vruntime_add(cfs_rq, se);
+	sum_w_vruntime_add(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 	se->min_vruntime = se->vruntime;
 	se->min_slice = se->slice;
@@ -831,7 +831,7 @@ static void __dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,
 				  &min_vruntime_cb);
-	avg_vruntime_sub(cfs_rq, se);
+	sum_w_vruntime_sub(cfs_rq, se);
 	update_zero_vruntime(cfs_rq);
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3334aa5..ab1bfa0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -678,7 +678,7 @@ struct cfs_rq {
 	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
-	s64			avg_vruntime;
+	s64			sum_w_vruntime;
 	u64			sum_weight;
 
 	u64			zero_vruntime;
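The renamed bookkeeping is simple enough to model outside the kernel. Below is a minimal Python sketch (the FakeCfsRq class and all names are illustrative only, not kernel code; it ignores scale_load_down() and the currently running entity): adding/removing entities maintains the weighted sum, shifting the reference point v0 by d costs only a multiply, and the true average that cfs_avg_vruntime() returns is one division away from the sum:

```python
from fractions import Fraction

class FakeCfsRq:
    """Toy model of the renamed cfs_rq fields; not kernel code."""
    def __init__(self):
        self.zero_vruntime = 0   # v0, the reference point
        self.sum_w_vruntime = 0  # \Sum (v_i - v0) * w_i
        self.sum_weight = 0      # \Sum w_i

    def sum_w_vruntime_add(self, vruntime, weight):
        key = vruntime - self.zero_vruntime
        self.sum_w_vruntime += key * weight
        self.sum_weight += weight

    def sum_w_vruntime_sub(self, vruntime, weight):
        key = vruntime - self.zero_vruntime
        self.sum_w_vruntime -= key * weight
        self.sum_weight -= weight

    def sum_w_vruntime_update(self, delta):
        # v0' = v0 + d  ==>  sum_w_vruntime' = sum_w_vruntime - d * sum_weight
        self.sum_w_vruntime -= self.sum_weight * delta
        self.zero_vruntime += delta

    def cfs_avg_vruntime(self):
        # The only true average here: one division away from the sum.
        return self.zero_vruntime + Fraction(self.sum_w_vruntime, self.sum_weight)

rq = FakeCfsRq()
tasks = [(100, 1024), (220, 512), (400, 2048)]  # (vruntime, weight) pairs
for v, w in tasks:
    rq.sum_w_vruntime_add(v, w)

# The maintained sum, divided once, matches the directly computed average:
direct_avg = Fraction(sum(v * w for v, w in tasks), sum(w for _, w in tasks))
assert rq.cfs_avg_vruntime() == direct_avg

# Shifting the reference point must leave the average unchanged:
rq.sum_w_vruntime_update(50)
assert rq.cfs_avg_vruntime() == direct_avg
```

Fraction is used so the reference-point-shift identity can be checked exactly, without floating-point rounding.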


* [tip: sched/core] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight
  2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
  2025-12-02 10:27   ` Peter Zijlstra
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     4ff674fa986c27ec8a0542479258c92d361a2566
Gitweb:        https://git.kernel.org/tip/4ff674fa986c27ec8a0542479258c92d361a2566
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 12:09:16 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:44 +01:00

sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight

The ::avg_load field is a long-standing misnomer: it says it's an
'average load', but in reality it's the momentary sum of the load
of all currently runnable tasks. We'd have to also perform a
division by nr_running (or use time-decay) to arrive at any sort
of average value.

This is clear from comments about the math of fair scheduling:

    *              \Sum w_i := cfs_rq->avg_load

The sum of all weights is ... the sum of all weights, not
the average of all weights.

To make it doubly confusing, there's also an ::avg_load
in the load-balancing struct sg_lb_stats, which *is* a
true average.

The second part of the field's name is a minor misnomer
as well: it says 'load', and it is indeed a load_weight
structure as it shares code with the load-balancer - but
it's only in an SMP load-balancing context where
load = weight; in the fair scheduling context the primary
purpose is the weighting of different nice levels.

So rename the field to ::sum_weight instead, which makes
the terminology of the EEVDF math match up with our
implementation of it:

    *              \Sum w_i := cfs_rq->sum_weight

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-6-mingo@kernel.org
---
 kernel/sched/fair.c  | 16 ++++++++--------
 kernel/sched/sched.h |  2 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f79951f..65b1065 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -608,7 +608,7 @@ static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)
  *
  *                    v0 := cfs_rq->zero_vruntime
  * \Sum (v_i - v0) * w_i := cfs_rq->avg_vruntime
- *              \Sum w_i := cfs_rq->avg_load
+ *              \Sum w_i := cfs_rq->sum_weight
  *
  * Since zero_vruntime closely tracks the per-task service, these
  * deltas: (v_i - v), will be in the order of the maximal (virtual) lag
@@ -625,7 +625,7 @@ avg_vruntime_add(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime += key * weight;
-	cfs_rq->avg_load += weight;
+	cfs_rq->sum_weight += weight;
 }
 
 static void
@@ -635,16 +635,16 @@ avg_vruntime_sub(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	s64 key = entity_key(cfs_rq, se);
 
 	cfs_rq->avg_vruntime -= key * weight;
-	cfs_rq->avg_load -= weight;
+	cfs_rq->sum_weight -= weight;
 }
 
 static inline
 void avg_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)
 {
 	/*
-	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*avg_load
+	 * v' = v + d ==> avg_vruntime' = avg_runtime - d*sum_weight
 	 */
-	cfs_rq->avg_vruntime -= cfs_rq->avg_load * delta;
+	cfs_rq->avg_vruntime -= cfs_rq->sum_weight * delta;
 }
 
 /*
@@ -655,7 +655,7 @@ u64 avg_vruntime(struct cfs_rq *cfs_rq)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -723,7 +723,7 @@ static int vruntime_eligible(struct cfs_rq *cfs_rq, u64 vruntime)
 {
 	struct sched_entity *curr = cfs_rq->curr;
 	s64 avg = cfs_rq->avg_vruntime;
-	long load = cfs_rq->avg_load;
+	long load = cfs_rq->sum_weight;
 
 	if (curr && curr->on_rq) {
 		unsigned long weight = scale_load_down(curr->load.weight);
@@ -5131,7 +5131,7 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		 *
 		 *   vl_i = (W + w_i)*vl'_i / W
 		 */
-		load = cfs_rq->avg_load;
+		load = cfs_rq->sum_weight;
 		if (curr && curr->on_rq)
 			load += scale_load_down(curr->load.weight);
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 82522c9..3334aa5 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -679,7 +679,7 @@ struct cfs_rq {
 	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
-	u64			avg_load;
+	u64			sum_weight;
 
 	u64			zero_vruntime;
 #ifdef CONFIG_SCHED_CORE


* [tip: sched/core] sched/fair: Separate se->vlag from se->vprot
  2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
  2025-12-01  8:06   ` [PATCH 3/6 v2] " Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     80390ead2080071cbd6f427ff8deb94d10a4a50f
Gitweb:        https://git.kernel.org/tip/80390ead2080071cbd6f427ff8deb94d10a4a50f
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 05:31:28 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:44 +01:00

sched/fair: Separate se->vlag from se->vprot

There are no real space concerns here, and keeping these fields
in a union makes reading (and tracing) the scheduler code harder.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-4-mingo@kernel.org
---
 include/linux/sched.h | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d395f28..bf96a7d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -586,15 +586,10 @@ struct sched_entity {
 	u64				sum_exec_runtime;
 	u64				prev_sum_exec_runtime;
 	u64				vruntime;
-	union {
-		/*
-		 * When !@on_rq this field is vlag.
-		 * When cfs_rq->curr == se (which implies @on_rq)
-		 * this field is vprot. See protect_slice().
-		 */
-		s64                     vlag;
-		u64                     vprot;
-	};
+	/* Approximated virtual lag: */
+	s64				vlag;
+	/* 'Protected' deadline, to give out minimum quantums: */
+	u64				vprot;
 	u64				slice;
 
 	u64				nr_migrations;


* [tip: sched/core] sched/fair: Clean up comments in 'struct cfs_rq'
  2025-12-01  6:46 ` [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq' Ingo Molnar
  2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  1 sibling, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     fb9a7458e508ef1beae8d80ee40c2cd1b5b45f3a
Gitweb:        https://git.kernel.org/tip/fb9a7458e508ef1beae8d80ee40c2cd1b5b45f3a
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 11:29:18 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:44 +01:00

sched/fair: Clean up comments in 'struct cfs_rq'

 - Fix vertical alignment
 - Fix typos
 - Fix capitalization

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://patch.msgid.link/20251201064647.1851919-3-mingo@kernel.org
---
 kernel/sched/sched.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2173e3d..82522c9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -670,13 +670,13 @@ struct balance_callback {
 	void (*func)(struct rq *rq);
 };
 
-/* CFS-related fields in a runqueue */
+/* Fair scheduling SCHED_{NORMAL,BATCH,IDLE} related fields in a runqueue: */
 struct cfs_rq {
 	struct load_weight	load;
 	unsigned int		nr_queued;
-	unsigned int		h_nr_queued;       /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_runnable;     /* SCHED_{NORMAL,BATCH,IDLE} */
-	unsigned int		h_nr_idle; /* SCHED_IDLE */
+	unsigned int		h_nr_queued;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_runnable;		/* SCHED_{NORMAL,BATCH,IDLE} */
+	unsigned int		h_nr_idle;		/* SCHED_IDLE */
 
 	s64			avg_vruntime;
 	u64			avg_load;
@@ -690,7 +690,7 @@ struct cfs_rq {
 	struct rb_root_cached	tasks_timeline;
 
 	/*
-	 * 'curr' points to currently running entity on this cfs_rq.
+	 * 'curr' points to the currently running entity on this cfs_rq.
 	 * It is set to NULL otherwise (i.e when none are currently running).
 	 */
 	struct sched_entity	*curr;
@@ -739,7 +739,7 @@ struct cfs_rq {
 	 */
 	int			on_list;
 	struct list_head	leaf_cfs_rq_list;
-	struct task_group	*tg;	/* group that "owns" this runqueue */
+	struct task_group	*tg;	/* Group that "owns" this runqueue */
 
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;


* [tip: sched/core] sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks
  2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
  2025-12-04  5:53   ` Shrikanth Hegde
  2025-12-14  7:46   ` [tip: sched/core] sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks tip-bot2 for Ingo Molnar
@ 2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
  2 siblings, 0 replies; 26+ messages in thread
From: tip-bot2 for Ingo Molnar @ 2025-12-15  7:59 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Ingo Molnar, Shrikanth Hegde, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     2b8c3d3dc9b1ee323e2982945088e3f5eebdf3dd
Gitweb:        https://git.kernel.org/tip/2b8c3d3dc9b1ee323e2982945088e3f5eebdf3dd
Author:        Ingo Molnar <mingo@kernel.org>
AuthorDate:    Wed, 26 Nov 2025 11:31:09 +01:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 15 Dec 2025 07:52:44 +01:00

sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks

Join two identical #ifdef blocks:

  #ifdef CONFIG_FAIR_GROUP_SCHED
  ...
  #endif

  #ifdef CONFIG_FAIR_GROUP_SCHED
  ...
  #endif

Also mark nested #ifdef blocks in the usual fashion, to make
it more apparent where in a nested hierarchy of #ifdefs we
are at a glance.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://patch.msgid.link/20251201064647.1851919-2-mingo@kernel.org
---
 kernel/sched/sched.h | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a40582d..2173e3d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -726,9 +726,7 @@ struct cfs_rq {
 	unsigned long		h_load;
 	u64			last_h_load_update;
 	struct sched_entity	*h_load_next;
-#endif /* CONFIG_FAIR_GROUP_SCHED */
 
-#ifdef CONFIG_FAIR_GROUP_SCHED
 	struct rq		*rq;	/* CPU runqueue to which this cfs_rq is attached */
 
 	/*
@@ -746,14 +744,14 @@ struct cfs_rq {
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;
 
-#ifdef CONFIG_CFS_BANDWIDTH
+# ifdef CONFIG_CFS_BANDWIDTH
 	int			runtime_enabled;
 	s64			runtime_remaining;
 
 	u64			throttled_pelt_idle;
-#ifndef CONFIG_64BIT
+#  ifndef CONFIG_64BIT
 	u64                     throttled_pelt_idle_copy;
-#endif
+#  endif
 	u64			throttled_clock;
 	u64			throttled_clock_pelt;
 	u64			throttled_clock_pelt_time;
@@ -765,7 +763,7 @@ struct cfs_rq {
 	struct list_head	throttled_list;
 	struct list_head	throttled_csd_list;
 	struct list_head        throttled_limbo_list;
-#endif /* CONFIG_CFS_BANDWIDTH */
+# endif /* CONFIG_CFS_BANDWIDTH */
 #endif /* CONFIG_FAIR_GROUP_SCHED */
 };
 


end of thread, other threads:[~2025-12-15  7:59 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-01  6:46 [PATCH 0/6] sched: Misc cleanups Ingo Molnar
2025-12-01  6:46 ` [PATCH 1/6] sched/fair: Join two #ifdef CONFIG_CFS_BANDWIDTH blocks Ingo Molnar
2025-12-04  5:53   ` Shrikanth Hegde
2025-12-06 10:48     ` Ingo Molnar
2025-12-14  7:46   ` [tip: sched/core] sched/fair: Join two #ifdef CONFIG_FAIR_GROUP_SCHED blocks tip-bot2 for Ingo Molnar
2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
2025-12-01  6:46 ` [PATCH 2/6] sched/fair: Clean up comments in 'struct cfs_rq' Ingo Molnar
2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
2025-12-01  6:46 ` [PATCH 3/6] sched/fair: Separate se->vlag from se->vprot Ingo Molnar
2025-12-01  8:06   ` [PATCH 3/6 v2] " Ingo Molnar
2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
2025-12-01  6:46 ` [PATCH 4/6] sched/fair: Rename avg_vruntime() to cfs_avg_vruntime() Ingo Molnar
2025-12-02 10:24   ` Peter Zijlstra
2025-12-02 15:15     ` Ingo Molnar
2025-12-01  6:46 ` [PATCH 5/6] sched/fair: Rename cfs_rq::avg_load to cfs_rq::sum_weight Ingo Molnar
2025-12-02 10:27   ` Peter Zijlstra
2025-12-02 15:57     ` Ingo Molnar
2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
2025-12-15  7:59   ` tip-bot2 for Ingo Molnar
2025-12-01  6:46 ` [PATCH 6/6] sched/fair: Rename cfs_rq::avg_vruntime to ::sum_w_vruntime, and helper functions Ingo Molnar
2025-12-02 10:35   ` Peter Zijlstra
2025-12-02 15:19     ` Ingo Molnar
2025-12-14  7:46   ` [tip: sched/core] " tip-bot2 for Ingo Molnar
2025-12-15  7:59   ` tip-bot2 for Ingo Molnar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox