From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Lynch <nathanl@linux.ibm.com>,
Mark Rutland <mark.rutland@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
ndesaulniers@google.com, linux-kernel@vger.kernel.org,
Nicholas Piggin <npiggin@gmail.com>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
Josh Poimboeuf <jpoimboe@kernel.org>
Subject: Re: [PATCH] powerpc/smp: Dynamically build powerpc topology
Date: Fri, 20 Oct 2023 18:51:05 +0530
Message-ID: <20231020132105.GN2194132@linux.vnet.ibm.com>
In-Reply-To: <874jil5wa8.fsf@mail.lhotse>
* Michael Ellerman <mpe@ellerman.id.au> [2023-10-20 23:10:55]:
> Srikar Dronamraju <srikar@linux.vnet.ibm.com> writes:
> > Currently there are four powerpc-specific sched topologies, all
> > statically defined. However, not all of these topologies are used by
> > every powerpc system.
> >
> > To avoid unnecessary degenerations, the scheduler compares masks and
> > flags. However, if the sched topologies are built dynamically, the
> > code is simpler and there is a greater chance of avoiding
> > degenerations.
> >
> > x86 also builds its sched topologies dynamically, and these changes
> > closely follow the way x86 builds its topologies.
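> >
> > A rough sketch of the intended construction is below. The guard
> > conditions (shared_caches, has_coregroup_support()) and the exact
> > shape of the helper are illustrative; the masks, flags and level
> > names are the ones from the static table removed in the diff:
> >
> > static struct sched_domain_topology_level powerpc_topology[6];
> >
> > /* Called once topology facts (shared_caches etc.) are known. */
> > static void __init build_sched_topology(void)
> > {
> > 	int i = 0;
> >
> > #ifdef CONFIG_SCHED_SMT
> > 	/* SMT level is only compiled in when the kernel supports it. */
> > 	powerpc_topology[i++] = (struct sched_domain_topology_level){
> > 		cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT)
> > 	};
> > #endif
> > 	/* CACHE level only when L2/L3 is shared between threads. */
> > 	if (shared_caches) {
> > 		powerpc_topology[i++] = (struct sched_domain_topology_level){
> > 			shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE)
> > 		};
> > 	}
> > 	/* MC level only on systems that expose coregroups. */
> > 	if (has_coregroup_support()) {
> > 		powerpc_topology[i++] = (struct sched_domain_topology_level){
> > 			cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC)
> > 		};
> > 	}
> > 	/* DIE level is always present. */
> > 	powerpc_topology[i++] = (struct sched_domain_topology_level){
> > 		cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(DIE)
> > 	};
> >
> > 	/* The zero-initialised trailing entry is the NULL sentinel. */
> > 	set_sched_topology(powerpc_topology);
> > }
> >
> > Only the levels that apply to the running system get registered, so
> > the scheduler no longer has to detect and degenerate the unused ones.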
> >
> > System Configuration
> > type=Shared mode=Uncapped smt=8 lcpu=128 mem=1063126592 kB cpus=96 ent=40.00
> >
> > $ lscpu
> > Architecture: ppc64le
> > Byte Order: Little Endian
> > CPU(s): 1024
> > On-line CPU(s) list: 0-1023
> > Model name: POWER10 (architected), altivec supported
> > Model: 2.0 (pvr 0080 0200)
> > Thread(s) per core: 8
> > Core(s) per socket: 32
> > Socket(s): 4
> > Hypervisor vendor: pHyp
> > Virtualization type: para
> > L1d cache: 8 MiB (256 instances)
> > L1i cache: 12 MiB (256 instances)
> > NUMA node(s): 4
> >
> > From dmesg of v6.5
> > [ 0.174444] smp: Bringing up secondary CPUs ...
> > [ 3.918535] smp: Brought up 4 nodes, 1024 CPUs
> > [ 38.001402] sysrq: Changing Loglevel
> > [ 38.001446] sysrq: Loglevel set to 9
> >
> > From dmesg of v6.5 + patch
> > [ 0.174462] smp: Bringing up secondary CPUs ...
> > [ 3.421462] smp: Brought up 4 nodes, 1024 CPUs
> > [ 35.417917] sysrq: Changing Loglevel
> > [ 35.417959] sysrq: Loglevel set to 9
> >
> > 5 runs of ppc64_cpu --smt=1 (time measured; lower is better)
> > Kernel N Min Max Median Avg Stddev %Change
> > v6.5 5 518.08 574.27 528.61 535.388 22.341542
> > +patch 5 481.73 495.47 484.21 486.402 5.7997 -9.14963
> >
> > 5 runs of ppc64_cpu --smt=8 (time measured; lower is better)
> > Kernel N Min Max Median Avg Stddev %Change
> > v6.5 5 1094.12 1117.1 1108.97 1106.3 8.606361
> > +patch 5 1067.5 1090.03 1073.89 1076.574 9.4189347 -2.68697
> >
> > Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
> > ---
> > arch/powerpc/kernel/smp.c | 78 ++++++++++++++-------------------------
> > 1 file changed, 28 insertions(+), 50 deletions(-)
> >
> > diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
> > index 48b8161179a8..c16443a04c26 100644
> > --- a/arch/powerpc/kernel/smp.c
> > +++ b/arch/powerpc/kernel/smp.c
> > @@ -92,15 +92,6 @@ EXPORT_PER_CPU_SYMBOL(cpu_l2_cache_map);
> > EXPORT_PER_CPU_SYMBOL(cpu_core_map);
> > EXPORT_SYMBOL_GPL(has_big_cores);
> >
> > -enum {
> > -#ifdef CONFIG_SCHED_SMT
> > - smt_idx,
> > -#endif
> > - cache_idx,
> > - mc_idx,
> > - die_idx,
> > -};
> > -
> > #define MAX_THREAD_LIST_SIZE 8
> > #define THREAD_GROUP_SHARE_L1 1
> > #define THREAD_GROUP_SHARE_L2_L3 2
> > @@ -1048,16 +1039,6 @@ static const struct cpumask *cpu_mc_mask(int cpu)
> > return cpu_coregroup_mask(cpu);
> > }
> >
> > -static struct sched_domain_topology_level powerpc_topology[] = {
> > -#ifdef CONFIG_SCHED_SMT
> > - { cpu_smt_mask, powerpc_smt_flags, SD_INIT_NAME(SMT) },
> > -#endif
> > - { shared_cache_mask, powerpc_shared_cache_flags, SD_INIT_NAME(CACHE) },
> > - { cpu_mc_mask, powerpc_shared_proc_flags, SD_INIT_NAME(MC) },
> > - { cpu_cpu_mask, powerpc_shared_proc_flags, SD_INIT_NAME(DIE) },
> > - { NULL, },
> > -};
>
> This doesn't apply on my next or upstream.
>
> It looks like it depends on your other 6-patch series. Please append
> this patch to that series.
>
> cheers
Ok, I will append this patch to that series in the next iteration.
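
(Roughly, I expect to repost the combined set with something along the
lines of:

  git format-patch -v2 -7 --cover-letter

with this patch rebased on top as the last of the seven; exact flags to
be confirmed when I send the next iteration.)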
--
Thanks and Regards
Srikar Dronamraju