* [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
2025-04-09 5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
@ 2025-04-09 5:34 ` K Prateek Nayak
2025-04-16 19:16 ` [tip: sched/core] " tip-bot2 for K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu() K Prateek Nayak
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: K Prateek Nayak @ 2025-04-09 5:34 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
Viresh Kumar, linux-pm, linux-kernel
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak
Subsequent commits add support for dynamically updating the sched_group
struct's "asym_prefer_cpu" member from a remote CPU. Use READ_ONCE()
when reading "sg->asym_prefer_cpu" to ensure the load balancer always
reads the latest value.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c19459c8042..5e1bd9e8464c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10251,7 +10251,7 @@ sched_group_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group
(sgs->group_weight - sgs->idle_cpus != 1))
return false;
- return sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
+ return sched_asym(env->sd, env->dst_cpu, READ_ONCE(group->asym_prefer_cpu));
}
/* One group has more than one SMT CPU while the other group does not */
@@ -10488,7 +10488,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
case group_asym_packing:
/* Prefer to move from lowest priority CPU's work */
- return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
+ return sched_asym_prefer(READ_ONCE(sds->busiest->asym_prefer_cpu),
+ READ_ONCE(sg->asym_prefer_cpu));
case group_misfit_task:
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 12+ messages in thread

* [tip: sched/core] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
2025-04-09 5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
@ 2025-04-16 19:16 ` tip-bot2 for K Prateek Nayak
0 siblings, 0 replies; 12+ messages in thread
From: tip-bot2 for K Prateek Nayak @ 2025-04-16 19:16 UTC (permalink / raw)
To: linux-tip-commits
Cc: K Prateek Nayak, Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 872aa4de18889be63317a8c0f2de71a3a01e487c
Gitweb: https://git.kernel.org/tip/872aa4de18889be63317a8c0f2de71a3a01e487c
Author: K Prateek Nayak <kprateek.nayak@amd.com>
AuthorDate: Wed, 09 Apr 2025 05:34:43
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 16 Apr 2025 21:09:11 +02:00
sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
Subsequent commits add support for dynamically updating the sched_group
struct's "asym_prefer_cpu" member from a remote CPU. Use READ_ONCE()
when reading "sg->asym_prefer_cpu" to ensure the load balancer always
reads the latest value.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250409053446.23367-2-kprateek.nayak@amd.com
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0c19459..5e1bd9e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10251,7 +10251,7 @@ sched_group_asym(struct lb_env *env, struct sg_lb_stats *sgs, struct sched_group
(sgs->group_weight - sgs->idle_cpus != 1))
return false;
- return sched_asym(env->sd, env->dst_cpu, group->asym_prefer_cpu);
+ return sched_asym(env->sd, env->dst_cpu, READ_ONCE(group->asym_prefer_cpu));
}
/* One group has more than one SMT CPU while the other group does not */
@@ -10488,7 +10488,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
case group_asym_packing:
/* Prefer to move from lowest priority CPU's work */
- return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
+ return sched_asym_prefer(READ_ONCE(sds->busiest->asym_prefer_cpu),
+ READ_ONCE(sg->asym_prefer_cpu));
case group_misfit_task:
/*
* [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu()
2025-04-09 5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
@ 2025-04-09 5:34 ` K Prateek Nayak
2025-04-16 19:16 ` [tip: sched/core] " tip-bot2 for K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: K Prateek Nayak @ 2025-04-09 5:34 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
Viresh Kumar, linux-pm, linux-kernel
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak
A subset of AMD processors supporting Preferred Core rankings also
features the ability to dynamically switch these rankings at runtime to
bias load balancing towards or away from the LLC domain with the larger
cache.
To support dynamically updating "sg->asym_prefer_cpu" without needing to
rebuild the sched domain, introduce sched_update_asym_prefer_cpu() which
recomputes the "asym_prefer_cpu" when the core ranking of a CPU changes.
sched_update_asym_prefer_cpu() swaps the "sg->asym_prefer_cpu" with the
CPU whose ranking has changed if the new ranking is greater than that of
the "asym_prefer_cpu". If the CPU whose ranking has changed is the
current "asym_prefer_cpu", it scans the CPUs of the sched group to find
the new "asym_prefer_cpu" and sets it accordingly.
get_group() for non-overlapping sched domains returns the sched group
for the first CPU in the sched_group_span() which ensures all CPUs in
the group see the updated value of "asym_prefer_cpu".
Overlapping groups are allocated differently and will require moving
"asym_prefer_cpu" to "sg->sgc" but, since current implementations do not
set "SD_ASYM_PACKING" at the NUMA domains, skip the additional
indirection and place a WARN_ON_ONCE() to alert any future users.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
include/linux/sched/topology.h | 6 ++++
kernel/sched/topology.c | 58 ++++++++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 7b4301b7235f..198bb5cc1774 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -195,6 +195,8 @@ struct sched_domain_topology_level {
};
extern void __init set_sched_topology(struct sched_domain_topology_level *tl);
+extern void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio);
+
# define SD_INIT_NAME(type) .name = #type
@@ -223,6 +225,10 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
return true;
}
+static inline void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+}
+
#endif /* !CONFIG_SMP */
#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index bbc2fc2c7c22..a2a38e1b6f18 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1333,6 +1333,64 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
update_group_capacity(sd, cpu);
}
+#ifdef CONFIG_SMP
+
+/* Update the "asym_prefer_cpu" when arch_asym_cpu_priority() changes. */
+void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+ int asym_prefer_cpu = cpu;
+ struct sched_domain *sd;
+
+ guard(rcu)();
+
+ for_each_domain(cpu, sd) {
+ struct sched_group *sg;
+ int group_cpu;
+
+ if (!(sd->flags & SD_ASYM_PACKING))
+ continue;
+
+ /*
+ * Groups of overlapping domains are replicated per NUMA
+ * node and will require updating "asym_prefer_cpu" on
+ * each local copy.
+ *
+ * If you are hitting this warning, consider moving
+ * "sg->asym_prefer_cpu" to "sg->sgc->asym_prefer_cpu"
+ * which is shared by all the overlapping groups.
+ */
+ WARN_ON_ONCE(sd->flags & SD_OVERLAP);
+
+ sg = sd->groups;
+ if (cpu != sg->asym_prefer_cpu) {
+ /*
+ * Since the parent is a superset of the current group,
+ * if the cpu is not the "asym_prefer_cpu" at the
+ * current level, it cannot be the preferred CPU at
+ * higher levels either.
+ */
+ if (!sched_asym_prefer(cpu, sg->asym_prefer_cpu))
+ return;
+
+ WRITE_ONCE(sg->asym_prefer_cpu, cpu);
+ continue;
+ }
+
+ /* Ranking has improved; CPU is still the preferred one. */
+ if (new_prio >= old_prio)
+ continue;
+
+ for_each_cpu(group_cpu, sched_group_span(sg)) {
+ if (sched_asym_prefer(group_cpu, asym_prefer_cpu))
+ asym_prefer_cpu = group_cpu;
+ }
+
+ WRITE_ONCE(sg->asym_prefer_cpu, asym_prefer_cpu);
+ }
+}
+
+#endif /* CONFIG_SMP */
+
/*
* Set of available CPUs grouped by their corresponding capacities
* Each list entry contains a CPU mask reflecting CPUs that share the same
--
2.34.1
* [tip: sched/core] sched/topology: Introduce sched_update_asym_prefer_cpu()
2025-04-09 5:34 ` [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu() K Prateek Nayak
@ 2025-04-16 19:16 ` tip-bot2 for K Prateek Nayak
0 siblings, 0 replies; 12+ messages in thread
From: tip-bot2 for K Prateek Nayak @ 2025-04-16 19:16 UTC (permalink / raw)
To: linux-tip-commits
Cc: K Prateek Nayak, Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 0e3f6c3696424fa90d6f512779d617a05a1cf031
Gitweb: https://git.kernel.org/tip/0e3f6c3696424fa90d6f512779d617a05a1cf031
Author: K Prateek Nayak <kprateek.nayak@amd.com>
AuthorDate: Wed, 09 Apr 2025 05:34:44
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 16 Apr 2025 21:09:11 +02:00
sched/topology: Introduce sched_update_asym_prefer_cpu()
A subset of AMD processors supporting Preferred Core rankings also
features the ability to dynamically switch these rankings at runtime to
bias load balancing towards or away from the LLC domain with the larger
cache.
To support dynamically updating "sg->asym_prefer_cpu" without needing to
rebuild the sched domain, introduce sched_update_asym_prefer_cpu() which
recomputes the "asym_prefer_cpu" when the core ranking of a CPU changes.
sched_update_asym_prefer_cpu() swaps the "sg->asym_prefer_cpu" with the
CPU whose ranking has changed if the new ranking is greater than that of
the "asym_prefer_cpu". If the CPU whose ranking has changed is the
current "asym_prefer_cpu", it scans the CPUs of the sched group to find
the new "asym_prefer_cpu" and sets it accordingly.
get_group() for non-overlapping sched domains returns the sched group
for the first CPU in the sched_group_span() which ensures all CPUs in
the group see the updated value of "asym_prefer_cpu".
Overlapping groups are allocated differently and will require moving
"asym_prefer_cpu" to "sg->sgc" but, since current implementations do not
set "SD_ASYM_PACKING" at the NUMA domains, skip the additional
indirection and place a WARN_ON_ONCE() to alert any future users.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250409053446.23367-3-kprateek.nayak@amd.com
---
include/linux/sched/topology.h | 6 +++-
kernel/sched/topology.c | 58 +++++++++++++++++++++++++++++++++-
2 files changed, 64 insertions(+)
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 7b4301b..198bb5c 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -195,6 +195,8 @@ struct sched_domain_topology_level {
};
extern void __init set_sched_topology(struct sched_domain_topology_level *tl);
+extern void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio);
+
# define SD_INIT_NAME(type) .name = #type
@@ -223,6 +225,10 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
return true;
}
+static inline void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+}
+
#endif /* !CONFIG_SMP */
#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index bbc2fc2..a2a38e1 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1333,6 +1333,64 @@ next:
update_group_capacity(sd, cpu);
}
+#ifdef CONFIG_SMP
+
+/* Update the "asym_prefer_cpu" when arch_asym_cpu_priority() changes. */
+void sched_update_asym_prefer_cpu(int cpu, int old_prio, int new_prio)
+{
+ int asym_prefer_cpu = cpu;
+ struct sched_domain *sd;
+
+ guard(rcu)();
+
+ for_each_domain(cpu, sd) {
+ struct sched_group *sg;
+ int group_cpu;
+
+ if (!(sd->flags & SD_ASYM_PACKING))
+ continue;
+
+ /*
+ * Groups of overlapping domains are replicated per NUMA
+ * node and will require updating "asym_prefer_cpu" on
+ * each local copy.
+ *
+ * If you are hitting this warning, consider moving
+ * "sg->asym_prefer_cpu" to "sg->sgc->asym_prefer_cpu"
+ * which is shared by all the overlapping groups.
+ */
+ WARN_ON_ONCE(sd->flags & SD_OVERLAP);
+
+ sg = sd->groups;
+ if (cpu != sg->asym_prefer_cpu) {
+ /*
+ * Since the parent is a superset of the current group,
+ * if the cpu is not the "asym_prefer_cpu" at the
+ * current level, it cannot be the preferred CPU at
+ * higher levels either.
+ */
+ if (!sched_asym_prefer(cpu, sg->asym_prefer_cpu))
+ return;
+
+ WRITE_ONCE(sg->asym_prefer_cpu, cpu);
+ continue;
+ }
+
+ /* Ranking has improved; CPU is still the preferred one. */
+ if (new_prio >= old_prio)
+ continue;
+
+ for_each_cpu(group_cpu, sched_group_span(sg)) {
+ if (sched_asym_prefer(group_cpu, asym_prefer_cpu))
+ asym_prefer_cpu = group_cpu;
+ }
+
+ WRITE_ONCE(sg->asym_prefer_cpu, asym_prefer_cpu);
+ }
+}
+
+#endif /* CONFIG_SMP */
+
/*
* Set of available CPUs grouped by their corresponding capacities
* Each list entry contains a CPU mask reflecting CPUs that share the same
* [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
2025-04-09 5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 1/4] sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 2/4] sched/topology: Introduce sched_update_asym_prefer_cpu() K Prateek Nayak
@ 2025-04-09 5:34 ` K Prateek Nayak
2025-04-09 19:15 ` Mario Limonciello
2025-04-16 19:16 ` [tip: sched/core] " tip-bot2 for K Prateek Nayak
2025-04-09 5:34 ` [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu K Prateek Nayak
2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
4 siblings, 2 replies; 12+ messages in thread
From: K Prateek Nayak @ 2025-04-09 5:34 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
Viresh Kumar, linux-pm, linux-kernel
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak
A subset of AMD systems supporting Preferred Core rankings can have
their rankings changed dynamically at runtime. Update
"sg->asym_prefer_cpu" across the local hierarchy of the CPU when the
preferred core ranking changes.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
drivers/cpufreq/amd-pstate.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 6789eed1bb5b..8796217ccc60 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -844,8 +844,10 @@ static void amd_pstate_update_limits(unsigned int cpu)
if (highest_perf_changed) {
WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
- if (cur_high < CPPC_MAX_PERF)
+ if (cur_high < CPPC_MAX_PERF) {
sched_set_itmt_core_prio((int)cur_high, cpu);
+ sched_update_asym_prefer_cpu(cpu, prev_high, cur_high);
+ }
}
}
--
2.34.1
* Re: [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
2025-04-09 5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
@ 2025-04-09 19:15 ` Mario Limonciello
2025-04-16 19:16 ` [tip: sched/core] " tip-bot2 for K Prateek Nayak
1 sibling, 0 replies; 12+ messages in thread
From: Mario Limonciello @ 2025-04-09 19:15 UTC (permalink / raw)
To: K Prateek Nayak, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Gautham R. Shenoy, Rafael J. Wysocki,
Viresh Kumar, linux-pm, linux-kernel
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan
On 4/9/2025 12:34 AM, K Prateek Nayak wrote:
> A subset of AMD systems supporting Preferred Core rankings can have
> their rankings changed dynamically at runtime. Update
> "sg->asym_prefer_cpu" across the local hierarchy of the CPU when the
> preferred core ranking changes.
>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Acked-by: Mario Limonciello <mario.limonciello@amd.com>
> ---
> drivers/cpufreq/amd-pstate.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 6789eed1bb5b..8796217ccc60 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -844,8 +844,10 @@ static void amd_pstate_update_limits(unsigned int cpu)
> if (highest_perf_changed) {
> WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
>
> - if (cur_high < CPPC_MAX_PERF)
> + if (cur_high < CPPC_MAX_PERF) {
> sched_set_itmt_core_prio((int)cur_high, cpu);
> + sched_update_asym_prefer_cpu(cpu, prev_high, cur_high);
> + }
> }
> }
>
* [tip: sched/core] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
2025-04-09 5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
2025-04-09 19:15 ` Mario Limonciello
@ 2025-04-16 19:16 ` tip-bot2 for K Prateek Nayak
1 sibling, 0 replies; 12+ messages in thread
From: tip-bot2 for K Prateek Nayak @ 2025-04-16 19:16 UTC (permalink / raw)
To: linux-tip-commits
Cc: K Prateek Nayak, Peter Zijlstra (Intel), Mario Limonciello, x86,
linux-kernel
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 8157fbc907452aa5674df2de23c1c7305c907006
Gitweb: https://git.kernel.org/tip/8157fbc907452aa5674df2de23c1c7305c907006
Author: K Prateek Nayak <kprateek.nayak@amd.com>
AuthorDate: Wed, 09 Apr 2025 05:34:45
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 16 Apr 2025 21:09:11 +02:00
cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
A subset of AMD systems supporting Preferred Core rankings can have
their rankings changed dynamically at runtime. Update
"sg->asym_prefer_cpu" across the local hierarchy of the CPU when the
preferred core ranking changes.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mario Limonciello <mario.limonciello@amd.com>
Link: https://lore.kernel.org/r/20250409053446.23367-4-kprateek.nayak@amd.com
---
drivers/cpufreq/amd-pstate.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 6789eed..8796217 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -844,8 +844,10 @@ static void amd_pstate_update_limits(unsigned int cpu)
if (highest_perf_changed) {
WRITE_ONCE(cpudata->prefcore_ranking, cur_high);
- if (cur_high < CPPC_MAX_PERF)
+ if (cur_high < CPPC_MAX_PERF) {
sched_set_itmt_core_prio((int)cur_high, cpu);
+ sched_update_asym_prefer_cpu(cpu, prev_high, cur_high);
+ }
}
}
* [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu
2025-04-09 5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
` (2 preceding siblings ...)
2025-04-09 5:34 ` [PATCH v2 3/4] cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change K Prateek Nayak
@ 2025-04-09 5:34 ` K Prateek Nayak
2025-04-16 19:16 ` [tip: sched/core] " tip-bot2 for K Prateek Nayak
2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
4 siblings, 1 reply; 12+ messages in thread
From: K Prateek Nayak @ 2025-04-09 5:34 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Gautham R. Shenoy, Mario Limonciello, Rafael J. Wysocki,
Viresh Kumar, linux-pm, linux-kernel
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan, K Prateek Nayak
Add a file to read the local group's "asym_prefer_cpu" from debugfs. This
information was useful when debugging issues where "asym_prefer_cpu" was
incorrectly set to a CPU with a lower asym priority.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
kernel/sched/debug.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 56ae54e0ce6a..557246880a7e 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -588,6 +588,10 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
+
+ if (sd->flags & SD_ASYM_PACKING)
+ debugfs_create_u32("group_asym_prefer_cpu", 0444, parent,
+ (u32 *)&sd->groups->asym_prefer_cpu);
}
void update_sched_domain_debugfs(void)
--
2.34.1
* [tip: sched/core] sched/debug: Print the local group's asym_prefer_cpu
2025-04-09 5:34 ` [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu K Prateek Nayak
@ 2025-04-16 19:16 ` tip-bot2 for K Prateek Nayak
0 siblings, 0 replies; 12+ messages in thread
From: tip-bot2 for K Prateek Nayak @ 2025-04-16 19:16 UTC (permalink / raw)
To: linux-tip-commits
Cc: K Prateek Nayak, Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 44671e21e3463f36f6c6e4b691216f60e85840e4
Gitweb: https://git.kernel.org/tip/44671e21e3463f36f6c6e4b691216f60e85840e4
Author: K Prateek Nayak <kprateek.nayak@amd.com>
AuthorDate: Wed, 09 Apr 2025 05:34:46
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Wed, 16 Apr 2025 21:09:11 +02:00
sched/debug: Print the local group's asym_prefer_cpu
Add a file to read the local group's "asym_prefer_cpu" from debugfs. This
information was useful when debugging issues where "asym_prefer_cpu" was
incorrectly set to a CPU with a lower asym priority.
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20250409053446.23367-5-kprateek.nayak@amd.com
---
kernel/sched/debug.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 56ae54e..5572468 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -588,6 +588,10 @@ static void register_sd(struct sched_domain *sd, struct dentry *parent)
debugfs_create_file("flags", 0444, parent, &sd->flags, &sd_flags_fops);
debugfs_create_file("groups_flags", 0444, parent, &sd->groups->flags, &sd_flags_fops);
debugfs_create_u32("level", 0444, parent, (u32 *)&sd->level);
+
+ if (sd->flags & SD_ASYM_PACKING)
+ debugfs_create_u32("group_asym_prefer_cpu", 0444, parent,
+ (u32 *)&sd->groups->asym_prefer_cpu);
}
void update_sched_domain_debugfs(void)
* Re: [PATCH v2 0/4] sched/fair: Dynamic asym priority support
2025-04-09 5:34 [PATCH v2 0/4] sched/fair: Dynamic asym priority support K Prateek Nayak
` (3 preceding siblings ...)
2025-04-09 5:34 ` [PATCH v2 4/4] sched/debug: Print the local group's asym_prefer_cpu K Prateek Nayak
@ 2025-04-10 10:52 ` Peter Zijlstra
2025-04-10 15:40 ` K Prateek Nayak
4 siblings, 1 reply; 12+ messages in thread
From: Peter Zijlstra @ 2025-04-10 10:52 UTC (permalink / raw)
To: K Prateek Nayak
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Gautham R. Shenoy,
Mario Limonciello, Rafael J. Wysocki, Viresh Kumar, linux-pm,
linux-kernel, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan
On Wed, Apr 09, 2025 at 05:34:42AM +0000, K Prateek Nayak wrote:
> K Prateek Nayak (4):
> sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
> sched/topology: Introduce sched_update_asym_prefer_cpu()
> cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
> sched/debug: Print the local group's asym_prefer_cpu
>
> drivers/cpufreq/amd-pstate.c | 4 ++-
> include/linux/sched/topology.h | 6 ++++
> kernel/sched/debug.c | 4 +++
> kernel/sched/fair.c | 5 +--
> kernel/sched/topology.c | 58 ++++++++++++++++++++++++++++++++++
> 5 files changed, 74 insertions(+), 3 deletions(-)
This seems reasonable. I'll queue it up, and unless someone (robot or
real person) objects, we'll get it merged :-)
* Re: [PATCH v2 0/4] sched/fair: Dynamic asym priority support
2025-04-10 10:52 ` [PATCH v2 0/4] sched/fair: Dynamic asym priority support Peter Zijlstra
@ 2025-04-10 15:40 ` K Prateek Nayak
0 siblings, 0 replies; 12+ messages in thread
From: K Prateek Nayak @ 2025-04-10 15:40 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Gautham R. Shenoy,
Mario Limonciello, Rafael J. Wysocki, Viresh Kumar, linux-pm,
linux-kernel, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Waiman Long, Swapnil Sapkal,
Dhananjay Ugwekar, Huang Rui, Perry Yuan
Hello Peter,
On 4/10/2025 4:22 PM, Peter Zijlstra wrote:
> On Wed, Apr 09, 2025 at 05:34:42AM +0000, K Prateek Nayak wrote:
>> K Prateek Nayak (4):
>> sched/fair: Use READ_ONCE() to read sg->asym_prefer_cpu
>> sched/topology: Introduce sched_update_asym_prefer_cpu()
>> cpufreq/amd-pstate: Update asym_prefer_cpu when core rankings change
>> sched/debug: Print the local group's asym_prefer_cpu
>>
>> drivers/cpufreq/amd-pstate.c | 4 ++-
>> include/linux/sched/topology.h | 6 ++++
>> kernel/sched/debug.c | 4 +++
>> kernel/sched/fair.c | 5 +--
>> kernel/sched/topology.c | 58 ++++++++++++++++++++++++++++++++++
>> 5 files changed, 74 insertions(+), 3 deletions(-)
>
> This seems reasonable. I'll queue it up, and unless someone (robot or
> real person) objects, we'll get it merged :-)
Thank you! I'll be ready with a fire extinguisher but hopefully I won't
need it :)
--
Thanks and Regards,
Prateek