From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
To: "Aneesh Kumar K.V", Srikar Dronamraju, Michael Ellerman,
	Benjamin Herrenschmidt, Michael Neuling, Vaidyanathan Srinivasan,
	Akshay Adiga, Shilpasri G Bhat, "Oliver O'Halloran",
	Nicholas Piggin, Murilo Opsfelder Araujo, Anton Blanchard
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	"Gautham R. Shenoy"
Subject: [PATCH v8 2/3] powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores
Date: Thu, 20 Sep 2018 22:52:38 +0530
Message-Id: <1537464159-25919-3-git-send-email-ego@linux.vnet.ibm.com>
In-Reply-To: <1537464159-25919-1-git-send-email-ego@linux.vnet.ibm.com>
References: <1537464159-25919-1-git-send-email-ego@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

Each of the SMT4 cores forming a big-core is a more or less
independent unit. Thus, when multiple tasks are scheduled to run on
the big-core, we get the best performance when the tasks are spread
across the pair of SMT4 cores.

This patch achieves this by setting the SMT level mask to correspond
to the small-core sibling mask on big-core systems. This patch also
ensures that the CACHE level sched-domain corresponding to the
big-core is created on big-core systems.

With this patch, the SMT sched-domains with SMT=8,4,2 on big-core
systems are as follows:

1) ppc64_cpu --smt=8

   CPU0 attaching sched-domain(s):
    domain-0: span=0,2,4,6 level=SMT
     groups: 0:{ span=0 cap=294 }, 2:{ span=2 cap=294 },
             4:{ span=4 cap=294 }, 6:{ span=6 cap=294 }
   CPU1 attaching sched-domain(s):
    domain-0: span=1,3,5,7 level=SMT
     groups: 1:{ span=1 cap=294 }, 3:{ span=3 cap=294 },
             5:{ span=5 cap=294 }, 7:{ span=7 cap=294 }

2) ppc64_cpu --smt=4

   CPU0 attaching sched-domain(s):
    domain-0: span=0,2 level=SMT
     groups: 0:{ span=0 cap=589 }, 2:{ span=2 cap=589 }
   CPU1 attaching sched-domain(s):
    domain-0: span=1,3 level=SMT
     groups: 1:{ span=1 cap=589 }, 3:{ span=3 cap=589 }

3) ppc64_cpu --smt=2

   The SMT domain is a trivial domain consisting of just one CPU.
   Hence this domain gets collapsed, leaving only the CACHE, DIE and
   NUMA domains.

Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
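---
Not part of the patch: for reviewers unfamiliar with the idiom, here is a
minimal user-space sketch of the sibling-mask accessor selection that
start_secondary() performs in the hunks below. The cpumasks are modelled as
plain bitmasks, the even/odd split of the eight threads into the two SMT4
small cores follows the sched-domain dumps above, and has_big_cores is
hard-coded where the kernel would detect it at boot.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* Full big-core sibling mask: all eight threads. */
static unsigned int cpu_sibling_mask(int cpu)
{
	(void)cpu;
	return 0xff;
}

/* Small-core sibling mask: only the SMT4 half this thread belongs to. */
static unsigned int cpu_smallcore_mask(int cpu)
{
	return (cpu & 1) ? 0xaa /* {1,3,5,7} */ : 0x55 /* {0,2,4,6} */;
}

int main(void)
{
	bool has_big_cores = true;	/* hard-coded for this sketch */
	unsigned int (*sibling_mask)(int) = cpu_sibling_mask;

	/* Same selection as in start_secondary() in the patch. */
	if (has_big_cores)
		sibling_mask = cpu_smallcore_mask;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("CPU%d SMT-level siblings: 0x%02x\n",
		       cpu, sibling_mask(cpu));
	return 0;
}

With has_big_cores set, each CPU reports only its SMT4 half (0x55 or 0xaa)
rather than the full big-core mask (0xff), which is exactly the narrower
span that the SMT sched-domain picks up in the dumps above.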
 arch/powerpc/kernel/smp.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 15095110..5cdcf44 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1265,6 +1265,7 @@ static void add_cpu_to_masks(int cpu)
 void start_secondary(void *unused)
 {
 	unsigned int cpu = smp_processor_id();
+	struct cpumask *(*sibling_mask)(int) = cpu_sibling_mask;
 
 	mmgrab(&init_mm);
 	current->active_mm = &init_mm;
@@ -1290,11 +1291,13 @@ void start_secondary(void *unused)
 	/* Update topology CPU masks */
 	add_cpu_to_masks(cpu);
 
+	if (has_big_cores)
+		sibling_mask = cpu_smallcore_mask;
 	/*
 	 * Check for any shared caches. Note that this must be done on a
 	 * per-core basis because one core in the pair might be disabled.
 	 */
-	if (!cpumask_equal(cpu_l2_cache_mask(cpu), cpu_sibling_mask(cpu)))
+	if (!cpumask_equal(cpu_l2_cache_mask(cpu), sibling_mask(cpu)))
 		shared_caches = true;
 
 	set_numa_node(numa_cpu_lookup_table[cpu]);
@@ -1361,6 +1364,13 @@ static const struct cpumask *shared_cache_mask(int cpu)
 	return cpu_l2_cache_mask(cpu);
 }
 
+#ifdef CONFIG_SCHED_SMT
+static const struct cpumask *smallcore_smt_mask(int cpu)
+{
+	return cpu_smallcore_mask(cpu);
+}
+#endif
+
 static struct sched_domain_topology_level power9_topology[] = {
 #ifdef CONFIG_SCHED_SMT
 	{ cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
@@ -1388,6 +1398,13 @@ void __init smp_cpus_done(unsigned int max_cpus)
 		shared_proc_topology_init();
 
 	dump_numa_cpu_topology();
+#ifdef CONFIG_SCHED_SMT
+	if (has_big_cores) {
+		pr_info("Using small cores at SMT level\n");
+		power9_topology[0].mask = smallcore_smt_mask;
+		powerpc_topology[0].mask = smallcore_smt_mask;
+	}
+#endif
 	/*
 	 * If any CPU detects that it's sharing a cache with another CPU then
 	 * use the deeper topology that is aware of this sharing.
-- 
1.9.4