From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Subject: [PATCH 0/3] Reintroduce cpu_core_mask
Date: Thu, 15 Apr 2021 17:39:31 +0530
Message-ID: <20210415120934.232271-1-srikar@linux.vnet.ibm.com>

Daniel reported that a guest no longer sees the requested topology in
multi-socket, single-NUMA-node configurations, for example:
 -smp 8,maxcpus=8,cores=2,threads=2,sockets=2
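
(That command asks for 2 sockets * 2 cores * 2 threads = 8 CPUs, yet the
guest reports a single socket, because the core mask no longer tracks
chip boundaries.)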

This patchset reintroduces cpu_core_mask so that users see the requested
topology, while still keeping the boot time of very large system
configurations in check.
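
A rough sketch of the idea behind patch 1 (names simplified; this is not
the exact patch): cpu_core_mask links each CPU with every online CPU
that reports the same chip id, so one mask spans exactly one socket.

  /* Hedged sketch: pair up CPUs that share an "ibm,chip-id". */
  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <asm/smp.h>		/* cpu_to_chip_id() on powerpc */

  static DEFINE_PER_CPU(cpumask_var_t, cpu_core_map);

  static void update_cpu_core_mask(int cpu)
  {
  	int chip = cpu_to_chip_id(cpu);
  	int i;

  	for_each_online_cpu(i) {
  		if (cpu_to_chip_id(i) != chip)
  			continue;
  		/* mark both CPUs as socket siblings */
  		cpumask_set_cpu(cpu, per_cpu(cpu_core_map, i));
  		cpumask_set_cpu(i, per_cpu(cpu_core_map, cpu));
  	}
  }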

It also caches the chip_id, as suggested by Michael Ellerman.
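
A hedged sketch of that caching (the real patch keeps a lookup table in
arch/powerpc/kernel/prom.c; the names here are illustrative): resolve
the device-tree chip id once per CPU instead of on every mask update.

  #include <linux/threads.h>	/* NR_CPUS */
  #include <asm/smp.h>		/* cpu_to_chip_id() walks the device tree */

  /*
   * -1 == not resolved yet. Note cpu_to_chip_id() can itself return -1
   * when the property is absent; the real patch accounts for that.
   */
  static int chip_id_lookup_table[NR_CPUS] = { [0 ... NR_CPUS - 1] = -1 };

  static int cached_cpu_to_chip_id(int cpu)
  {
  	if (chip_id_lookup_table[cpu] == -1)
  		chip_id_lookup_table[cpu] = cpu_to_chip_id(cpu);
  	return chip_id_lookup_table[cpu];
  }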

4 threads/core; 4 cores/socket; 4 sockets/node; 2 nodes in system
  -numa node,nodeid=0,memdev=m0 \
  -numa node,nodeid=1,memdev=m1 \
  -smp 128,sockets=8,threads=4,maxcpus=128  \
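
With these options QEMU derives 128 / (8 sockets * 4 threads) = 4
cores/socket; spread across the 2 NUMA nodes that is 4 sockets/node, so
lscpu should report 8 sockets.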

5.12.0-rc5 (or any kernel with commit 4ca234a9cbd7)
---------------------------------------------------
srikar@cloudy:~$ lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Thread(s) per core:              4
Core(s) per socket:              16
Socket(s):                       2                 <<<<<-----
NUMA node(s):                    2
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       1 MiB
L1i cache:                       1 MiB
NUMA node0 CPU(s):               0-15,32-47,64-79,96-111
NUMA node1 CPU(s):               16-31,48-63,80-95,112-127
--
srikar@cloudy:~$ dmesg |grep smp
[    0.010658] smp: Bringing up secondary CPUs ...
[    0.424681] smp: Brought up 2 nodes, 128 CPUs
--
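
Note how Socket(s) follows the NUMA node count (2) rather than the
requested 8 sockets, because topology_core_cpumask currently reports the
per-node mask.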

5.12.0-rc5 + 3 patches
----------------------
srikar@cloudy:~$ lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Thread(s) per core:              4
Core(s) per socket:              4
Socket(s):                       8    <<<<-----
NUMA node(s):                    2
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       1 MiB
L1i cache:                       1 MiB
NUMA node0 CPU(s):               0-15,32-47,64-79,96-111
NUMA node1 CPU(s):               16-31,48-63,80-95,112-127
--
srikar@cloudy:~$ dmesg |grep smp
[    0.010372] smp: Bringing up secondary CPUs ...
[    0.417892] smp: Brought up 2 nodes, 128 CPUs

5.12.0-rc5
----------
srikar@cloudy:~$ lscpu
Architecture:                    ppc64le
Byte Order:                      Little Endian
CPU(s):                          1024
On-line CPU(s) list:             0-1023
Thread(s) per core:              8
Core(s) per socket:              128
Socket(s):                       1
NUMA node(s):                    1
Model:                           2.3 (pvr 004e 1203)
Model name:                      POWER9 (architected), altivec supported
Hypervisor vendor:               KVM
Virtualization type:             para
L1d cache:                       4 MiB
L1i cache:                       4 MiB
NUMA node0 CPU(s):               0-1023
srikar@cloudy:~$ dmesg | grep smp
[    0.027753] smp: Bringing up secondary CPUs ...
[    2.315193] smp: Brought up 1 node, 1024 CPUs

5.12.0-rc5 + 3 patches
----------------------
srikar@cloudy:~$ dmesg | grep smp
[    0.027659] smp: Bringing up secondary CPUs ...
[    2.532739] smp: Brought up 1 node, 1024 CPUs
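
That is, bringing up 1024 CPUs takes roughly 2.29s before and 2.51s
after the patches (subtracting the start timestamps), an increase of
about 0.22s, while the 128-CPU configuration above is essentially
unchanged.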

I have also booted and tested these kernels on PowerVM and PowerNV;
there too the increase in secondary-CPU bring-up time is negligible.

Srikar Dronamraju (3):
  powerpc/smp: Reintroduce cpu_core_mask
  Revert "powerpc/topology: Update topology_core_cpumask"
  powerpc/smp: Cache CPU to chip lookup

 arch/powerpc/include/asm/smp.h      |  6 ++++
 arch/powerpc/include/asm/topology.h |  2 +-
 arch/powerpc/kernel/prom.c          | 19 +++++++---
 arch/powerpc/kernel/smp.c           | 56 +++++++++++++++++++++++++----
 4 files changed, 71 insertions(+), 12 deletions(-)
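
For context, the revert in patch 2 amounts to roughly the following
(hedged sketch; the exact define may differ):

  /*
   * arch/powerpc/include/asm/topology.h, approximately: point the
   * generic core mask back at the per-chip map, from which lscpu's
   * Socket(s) count is derived, instead of the per-node mask.
   */
  #define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))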

-- 
2.25.1

