From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Mark Rutland <mark.rutland@arm.com>,
Valentin Schneider <vschneid@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
"Paul E. McKenney" <paulmck@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
"ndesaulniers@google.com" <ndesaulniers@google.com>,
linux-kernel@vger.kernel.org,
Rohan McLure <rmclure@linux.ibm.com>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
Josh Poimboeuf <jpoimboe@kernel.org>
Subject: [PATCH v4 1/5] powerpc/smp: Enable Asym packing for cores on shared processor
Date: Thu, 9 Nov 2023 11:19:29 +0530
Message-ID: <20231109054938.26589-2-srikar@linux.vnet.ibm.com>
In-Reply-To: <20231109054938.26589-1-srikar@linux.vnet.ibm.com>

If there are shared processor LPARs, the underlying hypervisor can have
more virtual cores to schedule than there are physical cores.

Starting with POWER9, a big core (aka SMT8 core) has two nearly
independent thread groups. On shared processor LPARs, it helps to pack
threads onto a smaller number of cores so that overall system
performance and utilization improve. PowerVM schedules at the big core
level, hence packing to fewer cores helps.

For example, say there are two 8-core shared LPARs backed by an 8-core
shared physical pool, each running 8 threads. Consolidating the 8
threads onto 4 cores in each LPAR lets both perform better, because the
two LPARs together then need only 8 physical cores (4 + 4): each LPAR
gets 100% of the time to run its applications and the hypervisor does
not have to switch virtual cores in and out.

To achieve this, enable the SD_ASYM_PACKING flag at the CACHE, MC and
PKG domain levels when the system is running in shared processor mode
and has big cores.
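
The gist of the change, as a condensed sketch of the diff below (it
reuses the existing is_shared_processor() and has_big_cores helpers and
is not meant as a drop-in replacement for the patch):

	/* Decide once, at topology fixup time, whether to asym-pack. */
	if (is_shared_processor() && has_big_cores)
		static_branch_enable(&splpar_asym_pack);

	/* Domain flag callbacks then add SD_ASYM_PACKING when the key is set. */
	static int powerpc_shared_proc_flags(void)
	{
		if (static_branch_unlikely(&splpar_asym_pack))
			return SD_ASYM_PACKING;

		return 0;
	}
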
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
Changelog:
v3 -> v4:
- Don't use splpar_asym_pack with SMT
- Conflict resolution due to rebase
  (DIE changed to PKG)
v2 -> v3:
- Handle comments from Michael Ellerman.
- Rework using the existing cpu_has_feature static key
v1 -> v2:
- Use a jump label instead of a variable.
arch/powerpc/kernel/smp.c | 37 +++++++++++++++++++++++++++++--------
1 file changed, 29 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ab691c89d787..69a3262024f1 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -993,16 +993,20 @@ static bool shared_caches;
/* cpumask of CPUs with asymmetric SMT dependency */
static int powerpc_smt_flags(void)
{
- int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
+ if (!cpu_has_feature(CPU_FTR_ASYM_SMT))
+ return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
- if (cpu_has_feature(CPU_FTR_ASYM_SMT)) {
- printk_once(KERN_INFO "Enabling Asymmetric SMT scheduling\n");
- flags |= SD_ASYM_PACKING;
- }
- return flags;
+ return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES | SD_ASYM_PACKING;
}
#endif
+/*
+ * On shared processor LPARs scheduled on a big core (which has two or more
+ * independent thread groups per core), prefer lower numbered CPUs, so
+ * that the workload consolidates onto fewer cores.
+ */
+static __ro_after_init DEFINE_STATIC_KEY_FALSE(splpar_asym_pack);
+
/*
* P9 has a slightly odd architecture where pairs of cores share an L2 cache.
* This topology makes it *much* cheaper to migrate tasks between adjacent cores
@@ -1011,9 +1015,20 @@ static int powerpc_smt_flags(void)
*/
static int powerpc_shared_cache_flags(void)
{
+ if (static_branch_unlikely(&splpar_asym_pack))
+ return SD_SHARE_PKG_RESOURCES | SD_ASYM_PACKING;
+
return SD_SHARE_PKG_RESOURCES;
}
+static int powerpc_shared_proc_flags(void)
+{
+ if (static_branch_unlikely(&splpar_asym_pack))
+ return SD_ASYM_PACKING;
+
+ return 0;
+}
+
/*
* We can't just pass cpu_l2_cache_mask() directly because
* returns a non-const pointer and the compiler barfs on that.
@@ -1050,8 +1065,8 @@ static struct sched_domain_topology_level powerpc_topology[] = {
{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
#endif
{ shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
- { cpu_mc_mask, SD_INIT_NAME(MC) },
- { cpu_cpu_mask, SD_INIT_NAME(PKG) },
+ { cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC) },
+ { cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(PKG) },
{ NULL, },
};
@@ -1686,7 +1701,13 @@ static void __init fixup_topology(void)
{
int i;
+ if (is_shared_processor() && has_big_cores)
+ static_branch_enable(&splpar_asym_pack);
+
#ifdef CONFIG_SCHED_SMT
+ if (cpu_has_feature(CPU_FTR_ASYM_SMT))
+ pr_info_once("Enabling Asymmetric SMT scheduling\n");
+
if (has_big_cores) {
pr_info("Big cores detected but using small core scheduling\n");
powerpc_topology[smt_idx].mask = smallcore_smt_mask;
--
2.31.1
Thread overview: 15+ messages
2023-11-09 5:49 [PATCH v4 0/5] powerpc/smp: Topology and shared processor optimizations Srikar Dronamraju
2023-11-09 5:49 ` Srikar Dronamraju [this message]
2023-11-15 5:27 ` [PATCH v4 1/5] powerpc/smp: Enable Asym packing for cores on shared processor Aneesh Kumar K.V
2023-11-15 5:42 ` Srikar Dronamraju
2023-11-15 6:35 ` Aneesh Kumar K.V
2023-11-15 11:35 ` Srikar Dronamraju
2023-11-09 5:49 ` [PATCH v4 2/5] powerpc/smp: Disable MC domain for " Srikar Dronamraju
2023-11-09 5:49 ` [PATCH v4 3/5] powerpc/smp: Add __ro_after_init attribute Srikar Dronamraju
2023-11-09 5:49 ` [PATCH v4 4/5] powerpc/smp: Avoid asym packing within thread_group of a core Srikar Dronamraju
2023-11-09 5:49 ` [PATCH v4 5/5] powerpc/smp: Dynamically build Powerpc topology Srikar Dronamraju
2023-11-15 5:54 ` [PATCH v4 0/5] powerpc/smp: Topology and shared processor optimizations Aneesh Kumar K.V
2023-11-15 6:16 ` Srikar Dronamraju
2023-12-11 2:56 ` Srikar Dronamraju
2023-12-11 10:45 ` Michael Ellerman
2023-12-13 11:20 ` Aneesh Kumar K.V