linux-pm.vger.kernel.org archive mirror
* [PATCH v2 0/4] sched/fair: Dynamic asym priority support
@ 2025-04-09  5:34 K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
From: K Prateek Nayak @ 2025-04-09  5:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak

A subset of AMD Processors which support Preferred Core rankings can
have these rankings change at runtime to bias the load balancing towards
CPUs with higher frequency / larger cache.

In the current implementation, the CPU with the highest asym priority -
the "asym_prefer_cpu" - is cached in the sched_group struct when
building the sched domain hierarchy.

The previous approach in [1], which uncached the "asym_prefer_cpu" and
computed it during load balancing, was not popular as it not only lost
the benefits of caching but also added more overhead in
update_sg_lb_stats().

At OSPM'25, Vincent suggested retaining "asym_prefer_cpu" but updating
it dynamically when the asym priority changes without needing to
rebuild the entire sched domain hierarchy.

Introduce sched_update_asym_prefer_cpu() which traverses the local
hierarchy on a priority change and recomputes the "asym_prefer_cpu".
Since the sched_group for a !SD_OVERLAP domain is shared by all the CPUs
in sched_group_span(sg) (see get_group() in kernel/sched/topology.c),
updating the "asym_prefer_cpu" in the groups of the local hierarchy
ensures all the CPUs in the group see the updated value.

Groups of SD_OVERLAP domains can be supported too but this involves
moving "asym_prefer_cpu" to "sg->sgc" which adds another level of
indirection. Since there isn't currently a use case where both
SD_OVERLAP and SD_ASYM_PACKING are set for the same sched domain, v2
keeps things simple and only extends dynamic updates to groups of
!SD_OVERLAP domains. If this forward-looking enablement is required,
please do let me know.

Printing the "asym_prefer_cpu" for the local group in debugfs has not
only proved useful for debugging this series but has also helped uncover
other unrelated issues like [2], which is why I've retained it for
inclusion.

This series is based on:

  git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core

at commit 6432e163ba1b ("sched/isolation: Make use of more than one
housekeeping cpu")

[1] https://lore.kernel.org/lkml/20241211185552.4553-9-kprateek.nayak@amd.com/
[2] https://lore.kernel.org/lkml/20250409030004.23008-1-kprateek.nayak@amd.com/
---
Changelog:

v1..v2:

o New approach that introduces sched_update_asym_prefer_cpu() to update
  the "asym_prefer_cpu" dynamically on ranking change without rebuilding
  the sched domain hierarchy.
---
K Prateek Nayak (4):
  sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
  sched/topology: Introduce sched_update_asym_prefer_cpu()
  cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
  sched/debug: Print the local group's asym_prefer_cpu

 drivers/cpufreq/amd-pstate.c   |  4 ++-
 include/linux/sched/topology.h |  6 ++++
 kernel/sched/debug.c           |  4 +++
 kernel/sched/fair.c            |  5 +--
 kernel/sched/topology.c        | 58 ++++++++++++++++++++++++++++++++++
 5 files changed, 74 insertions(+), 3 deletions(-)


base-commit: 6432e163ba1b7d80b5876792ce53e511f041ab91
-- 
2.34.1



* [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
  2025-04-09  5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
@ 2025-04-09  5:34 ` K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu() K Prateek Nayak
From: K Prateek Nayak @ 2025-04-09  5:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak

Subsequent commits add support for dynamically updating the sched_group
struct's "asym_prefer_cpu" member from a remote CPU. Use READ_ONCE()
when reading "sg->asym_prefer_cpu" to ensure the load balancer always
reads the latest value.

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c19459c8042..5e1bd9e8464c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10251,7 +10251,7 @@ sched_group_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group
 	    (sgs->group_weight - sgs->idle_cpus != 1))
 		return false;
 
-	return sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
+	return sched_asym(env->sd, env->dst_cpu, READ_ONCE(group->asym_prefer_cpu));
 }
 
 /* One group has more than one SMT CPU while the other group does not */
@@ -10488,7 +10488,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
+		return sched_asym_prefer(READ_ONCE(sds->busiest->asym_prefer_cpu),
+					 READ_ONCE(sg->asym_prefer_cpu));
 
 	case group_misfit_task:
 		/*
-- 
2.34.1



* [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu()
  2025-04-09  5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
@ 2025-04-09  5:34 ` K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
From: K Prateek Nayak @ 2025-04-09  5:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak

A subset of AMD Processors supporting Preferred Core Rankings also
features the ability to dynamically switch these rankings at runtime to
bias load balancing towards or away from the LLC domain with the larger
cache.

To support dynamically updating "sg->asym_prefer_cpu" without needing to
rebuild the sched domain, introduce sched_update_asym_prefer_cpu() which
recomputes the "asym_prefer_cpu" when the core ranking of a CPU changes.

sched_update_asym_prefer_cpu() swaps the "sg->asym_prefer_cpu" with the
CPU whose ranking has changed if the new ranking is greater than that of
the "asym_prefer_cpu". If the CPU whose ranking has changed is the
current "asym_prefer_cpu", it scans the CPUs of the sched group to find
the new "asym_prefer_cpu" and sets it accordingly.

get_group() for non-overlapping sched domains returns the sched group of
the first CPU in the sched_group_span(), which ensures all CPUs in the
group see the updated value of "asym_prefer_cpu".

Overlapping groups are allocated differently and would require moving
the "asym_prefer_cpu" to "sg->sgc" but, since the current
implementations do not set "SD_ASYM_PACKING" at NUMA domains, skip the
additional indirection and place a WARN_ON_ONCE() to alert any future
users.

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched/topology.h |  6 ++++
 kernel/sched/topology.c        | 58 ++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 7b4301b7235f..198bb5cc1774 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -195,6 +195,8 @@ struct sched_domain_topology_level {
 };
 
 extern void __init set_sched_topology(struct sched_domain_topology_level *tl);
+extern void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio);
+
 
 # define SD_INIT_NAME(type)		.name = #type
 
@@ -223,6 +225,10 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
 	return true;
 }
 
+static inline void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+}
+
 #endif	/* !CONFIG_SMP */
 
 #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index bbc2fc2c7c22..a2a38e1b6f18 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1333,6 +1333,64 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 	update_group_capacity(sd, cpu);
 }
 
+#ifdef CONFIG_SMP
+
+/* Update the "asym_prefer_cpu" when arch_asym_cpu_priority() changes. */
+void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+	int asym_prefer_cpu = cpu;
+	struct sched_domain *sd;
+
+	guard(rcu)();
+
+	for_each_domain(cpu, sd) {
+		struct sched_group *sg;
+		int group_cpu;
+
+		if (!(sd->flags & SD_ASYM_PACKING))
+			continue;
+
+		/*
+		 * Groups of overlapping domains are replicated per NUMA
+		 * node and will require updating "asym_prefer_cpu" on
+		 * each local copy.
+		 *
+		 * If you are hitting this warning, consider moving
+		 * "sg->asym_prefer_cpu" to "sg->sgc->asym_prefer_cpu"
+		 * which is shared by all the overlapping groups.
+		 */
+		WARN_ON_ONCE(sd->flags & SD_OVERLAP);
+
+		sg = sd->groups;
+		if (cpu != sg->asym_prefer_cpu) {
+			/*
+			 * Since the parent is a superset of the current group,
+			 * if the cpu is not the "asym_prefer_cpu" at the
+			 * current level, it cannot be the preferred CPU at
+			 * higher levels either.
+			 */
+			if (!sched_asym_prefer(cpu, sg->asym_prefer_cpu))
+				return;
+
+			WRITE_ONCE(sg->asym_prefer_cpu, cpu);
+			continue;
+		}
+
+		/* Ranking has improved; CPU is still the preferred one. */
+		if (new_prio >= old_prio)
+			continue;
+
+		for_each_cpu(group_cpu, sched_group_span(sg)) {
+			if (sched_asym_prefer(group_cpu, asym_prefer_cpu))
+				asym_prefer_cpu = group_cpu;
+		}
+
+		WRITE_ONCE(sg->asym_prefer_cpu, asym_prefer_cpu);
+	}
+}
+
+#endif /* CONFIG_SMP */
+
 /*
  * Set of available CPUs grouped by their corresponding capacities
  * Each list entry contains a CPU mask reflecting CPUs that share the same
-- 
2.34.1



* [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
  2025-04-09  5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu() K Prateek Nayak
@ 2025-04-09  5:34 ` K Prateek Nayak
  2025-04-09 19:15   ` Mario Limonciello
  2025-04-09  5:34 ` [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu K Prateek Nayak
  2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
From: K Prateek Nayak @ 2025-04-09  5:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak

A subset of AMD systems supporting Preferred Core rankings can have
their rankings changed dynamically at runtime. Update the
"sg->asym_prefer_cpu" across the local hierarchy of the CPU when its
preferred core ranking changes.

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 drivers/cpufreq/amd-pstate.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 6789eed1bb5b..8796217ccc60 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -844,8 +844,10 @@ static void amd_pstate_update_limits(unsigned int cpu)
 	if (highest_perf_changed) {
 		WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
 
-		if (cur_high < CPPC_MAX_PERF)
+		if (cur_high < CPPC_MAX_PERF) {
 			sched_set_itmt_core_prio((int)cur_high, cpu);
+			sched_update_asym_prefer_cpu(cpu, prev_high, cur_high);
+		}
 	}
 }
 
-- 
2.34.1



* [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu
  2025-04-09  5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
@ 2025-04-09  5:34 ` K Prateek Nayak
  2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
From: K Prateek Nayak @ 2025-04-09  5:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak

Add a file to read the local group's "asym_prefer_cpu" from debugfs.
This information proved useful when debugging issues where the
"asym_prefer_cpu" was incorrectly set to a CPU with a lower asym
priority.

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/debug.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 56ae54e0ce6a..557246880a7e 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -588,6 +588,10 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
 	debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
 	debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
 	debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
+
+	if (sd->flags & SD_ASYM_PACKING)
+		debugfs_create_u32("group_asym_prefer_cpu", 0444, parent,
+				   (u32 *)&sd->groups->asym_prefer_cpu);
 }
 
 void update_sched_domain_debugfs(void)
-- 
2.34.1



* Re: [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
  2025-04-09  5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
@ 2025-04-09 19:15   ` Mario Limonciello
From: Mario Limonciello @ 2025-04-09 19:15 UTC (permalink / raw)
  To: K Prateek Nayak, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Gautham R. Shenoy, Rafael J. Wysocki,
	Viresh Kumar, linux-pm, linux-kernel
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan

On 4/9/2025 12:34 AM, K Prateek Nayak wrote:
> A subset of AMD systems supporting Preferred Core rankings can have
> their rankings changed dynamically at runtime. Update the
> "sg->asym_prefer_cpu" across the local hierarchy of CPU when the
> preferred core ranking changes.
> 
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Acked-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
>   drivers/cpufreq/amd-pstate.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 6789eed1bb5b..8796217ccc60 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -844,8 +844,10 @@ static void amd_pstate_update_limits(unsigned int cpu)
>   	if (highest_perf_changed) {
>   		WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
>   
> -		if (cur_high < CPPC_MAX_PERF)
> +		if (cur_high < CPPC_MAX_PERF) {
>   			sched_set_itmt_core_prio((int)cur_high, cpu);
> +			sched_update_asym_prefer_cpu(cpu, prev_high, cur_high);
> +		}
>   	}
>   }
>   



* Re: [PATCH v2 0/4] sched/fair: Dynamic asym priority support
  2025-04-09  5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
  2025-04-09  5:34 ` [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu K Prateek Nayak
@ 2025-04-10 10:52 ` Peter Zijlstra
  2025-04-10 15:40   ` K Prateek Nayak
From: Peter Zijlstra @ 2025-04-10 10:52 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Gautham R. Shenoy,
	Mario Limonciello, Rafael J. Wysocki, Viresh Kumar, linux-pm,
	linux-kernel, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan

On Wed, Apr 09, 2025 at 05:34:42AM +0000, K Prateek Nayak wrote:
> K Prateek Nayak (4):
>   sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
>   sched/topology: Introduce sched_update_asym_prefer_cpu()
>   cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
>   sched/debug: Print the local group's asym_prefer_cpu
> 
>  drivers/cpufreq/amd-pstate.c   |  4 ++-
>  include/linux/sched/topology.h |  6 ++++
>  kernel/sched/debug.c           |  4 +++
>  kernel/sched/fair.c            |  5 +--
>  kernel/sched/topology.c        | 58 ++++++++++++++++++++++++++++++++++
>  5 files changed, 74 insertions(+), 3 deletions(-)

This seems reasonable. I'll queue it up, and unless someone (robot or
real person) objects, we'll get it merged :-)


* Re: [PATCH v2 0/4] sched/fair: Dynamic asym priority support
  2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
@ 2025-04-10 15:40   ` K Prateek Nayak
From: K Prateek Nayak @ 2025-04-10 15:40 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Gautham R. Shenoy,
	Mario Limonciello, Rafael J. Wysocki, Viresh Kumar, linux-pm,
	linux-kernel, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Waiman Long, Swapnil Sapkal,
	Dhananjay Ugwekar, Huang Rui, Perry Yuan

Hello Peter,

On 4/10/2025 4:22 PM, Peter Zijlstra wrote:
> On Wed, Apr 09, 2025 at 05:34:42AM +0000, K Prateek Nayak wrote:
>> K Prateek Nayak (4):
>>    sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
>>    sched/topology: Introduce sched_update_asym_prefer_cpu()
>>    cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
>>    sched/debug: Print the local group's asym_prefer_cpu
>>
>>   drivers/cpufreq/amd-pstate.c   |  4 ++-
>>   include/linux/sched/topology.h |  6 ++++
>>   kernel/sched/debug.c           |  4 +++
>>   kernel/sched/fair.c            |  5 +--
>>   kernel/sched/topology.c        | 58 ++++++++++++++++++++++++++++++++++
>>   5 files changed, 74 insertions(+), 3 deletions(-)
> 
> This seems reasonable. I'll queue it up, and unless someone (robot or
> real person) objects, we'll get it merged :-)

Thank you! I'll be ready with a fire extinguisher but hopefully I won't
need it :)

-- 
Thanks and Regards,
Prateek


