From: Gavin Shan <gshan@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: qemu-arm@nongnu.org, qemu-devel@nongnu.org,
	qemu-riscv@nongnu.org, rad@semihalf.com,
	peter.maydell@linaro.org, quic_llindhol@quicinc.com,
	eduardo@habkost.net, marcel.apfelbaum@gmail.com,
	philmd@linaro.org, wangyanan55@huawei.com, palmer@dabbelt.com,
	alistair.francis@wdc.com, bin.meng@windriver.com,
	thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com,
	imammedo@redhat.com, yihyu@redhat.com, shan.gavin@gmail.com
Subject: Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Date: Fri, 24 Feb 2023 16:47:15 +1100	[thread overview]
Message-ID: <2a541e96-fe04-0cd5-3f28-6eb69aff3b91@redhat.com> (raw)
In-Reply-To: <Y/disinKmr6gLby1@redhat.com>

On 2/23/23 11:57 PM, Daniel P. Berrangé wrote:
> On Thu, Feb 23, 2023 at 04:13:57PM +0800, Gavin Shan wrote:
>> For the arm64 and RISC-V architectures, the driver (drivers/base/arch_topology.c)
>> is used to populate the CPU topology in the Linux guest. It's required that
>> the CPUs in one socket don't span multiple NUMA nodes. Otherwise, the Linux
>> scheduling domains can't be sorted out, as the following warning message
>> indicates. To avoid this confusion, this series attempts to reject such
>> insane configurations.
>>
>>     -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>>     -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>>     -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>>     -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
> 
> This is somewhat odd as a config, because core 2 is in socket 0
> and core 3 is in socket 1, so it wouldn't make much conceptual
> sense to have them in the same NUMA node, while other cores within
> the same socket are in different NUMA nodes. Basically the split
> of NUMA nodes is not aligned with any level in the topology.
> 
> This series, however, also rejects configurations that I would
> normally consider to be reasonable. I've not tested the Linux kernel
> behaviour, but as a user I would expect to be able to ask for:
> 
>      -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>      -numa node,nodeid=0,cpus=0,memdev=ram0                \
>      -numa node,nodeid=1,cpus=1,memdev=ram1                \
>      -numa node,nodeid=2,cpus=2,memdev=ram2                \
>      -numa node,nodeid=3,cpus=3,memdev=ram3                \
>      -numa node,nodeid=4,cpus=4,memdev=ram4                \
>      -numa node,nodeid=5,cpus=5,memdev=ram5                \
> 
> ie, every core gets its own NUMA node
> 

It doesn't work for the Linux guest either. As the following warning message
indicates, the multicore (MC) domain isn't a subset of the DIE (cluster or
socket) domain. For example, the MC domain is 0-2 while the DIE domain is 0
for CPU-0.

[    0.023486] CPU-0: 36,56,0,-1 thread=0  core=0-2  cluster=0-2 llc=0    // parsed from ACPI PPTT
[    0.023490] CPU-1: 36,56,1,-1 thread=1  core=0-2  cluster=0-2 llc=1
[    0.023492] CPU-2: 36,56,2,-1 thread=2  core=0-2  cluster=0-2 llc=2
[    0.023494] CPU-3: 136,156,3,-1 thread=3  core=3-5  cluster=3-5 llc=3
[    0.023495] CPU-4: 136,156,4,-1 thread=4  core=3-5  cluster=3-5 llc=4
[    0.023497] CPU-5: 136,156,5,-1 thread=5  core=3-5  cluster=3-5 llc=5
[    0.023499] CPU-0: SMT=0  CLUSTER=0  MULTICORE=0-2  DIE=0  CPU-OF-NODE=0      // Seen by scheduling domain
[    0.023501] CPU-1: SMT=1  CLUSTER=1  MULTICORE=0-2  DIE=1  CPU-OF-NODE=1
[    0.023503] CPU-2: SMT=2  CLUSTER=2  MULTICORE=0-2  DIE=2  CPU-OF-NODE=2
[    0.023504] CPU-3: SMT=3  CLUSTER=3  MULTICORE=3-5  DIE=3  CPU-OF-NODE=3
[    0.023506] CPU-4: SMT=4  CLUSTER=4  MULTICORE=3-5  DIE=4  CPU-OF-NODE=4
[    0.023508] CPU-5: SMT=5  CLUSTER=5  MULTICORE=3-5  DIE=5  CPU-OF_NODE=5
         :
[    0.023555] BUG: arch topology borken
[    0.023556]      the MC domain not a subset of the DIE domain

NOTE that DIE and CPU-OF-NODE are the same since both are returned by
'cpumask_of_node(cpu_to_node(cpu))'.
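
For reference, the warning comes from the sanity check in the guest's
kernel/sched/topology.c (build_sched_domain()), which requires every child
scheduling domain to be contained in its parent. A condensed sketch of that
check, abridged from the upstream kernel (details may differ between
versions):

    /* Abridged sketch of kernel/sched/topology.c:build_sched_domain().
     * Here @child is the MC (multicore) domain and @sd its parent (DIE).
     * If the child's span isn't contained in the parent's span, the
     * "arch topology borken" warning fires and the parent span is
     * widened as a fixup.
     */
    if (child) {
            child->parent = sd;

            if (!cpumask_subset(sched_domain_span(child),
                                sched_domain_span(sd))) {
                    pr_err("BUG: arch topology borken\n");
                    pr_err("     the %s domain not a subset of the %s domain\n",
                           child->name, sd->name);
                    /* Fixup: make sure @sd spans at least @child's CPUs. */
                    cpumask_or(sched_domain_span(sd),
                               sched_domain_span(sd),
                               sched_domain_span(child));
            }
    }

With the per-core NUMA split above, the MC span (0-2) is wider than the DIE
span (a single CPU), so the check trips for every CPU.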


> Or to ask for every cluster as a NUMA node:
> 
>      -smp 6,maxcpus=6,sockets=2,clusters=3,cores=1,threads=1 \
>      -numa node,nodeid=0,cpus=0,memdev=ram0                \
>      -numa node,nodeid=1,cpus=1,memdev=ram1                \
>      -numa node,nodeid=2,cpus=2,memdev=ram2                \
>      -numa node,nodeid=3,cpus=3,memdev=ram3                \
>      -numa node,nodeid=4,cpus=4,memdev=ram4                \
>      -numa node,nodeid=5,cpus=5,memdev=ram5                \
> 

This case works fine for the Linux guest.

[    0.024505] CPU-0: 36,56,0,-1 thread=0  core=0-2  cluster=0 llc=0            // parsed from ACPI PPTT
[    0.024509] CPU-1: 36,96,1,-1 thread=1  core=0-2  cluster=1 llc=1
[    0.024511] CPU-2: 36,136,2,-1 thread=2  core=0-2  cluster=2 llc=2
[    0.024512] CPU-3: 176,196,3,-1 thread=3  core=3-5  cluster=3 llc=3
[    0.024514] CPU-4: 176,236,4,-1 thread=4  core=3-5  cluster=4 llc=4
[    0.024515] CPU-5: 176,276,5,-1 thread=5  core=3-5  cluster=5 llc=5
[    0.024518] CPU-0: SMT=0  CLUSTER=0  MULTICORE=0  DIE=0  CPU-OF-NODE=0      // Seen by scheduling domain
[    0.024519] CPU-1: SMT=1  CLUSTER=1  MULTICORE=1  DIE=1  CPU-OF-NODE=1
[    0.024521] CPU-2: SMT=2  CLUSTER=2  MULTICORE=2  DIE=2  CPU-OF-NODE=2
[    0.024522] CPU-3: SMT=3  CLUSTER=3  MULTICORE=3  DIE=3  CPU-OF-NODE=3
[    0.024524] CPU-4: SMT=4  CLUSTER=4  MULTICORE=4  DIE=4  CPU-OF-NODE=4
[    0.024525] CPU-5: SMT=5  CLUSTER=5  MULTICORE=5  DIE=5  CPU-OF-NODE=5


> In both cases the NUMA split is aligned with a given level
> in the topology, which was not the case with your example.
> 
> Rejecting these feels overly strict to me, and may risk breaking
> existing valid deployments, unless we can demonstrate those
> scenarios were unambiguously already broken ?
> 
> If there were something in the hardware specs that required
> this, then it would be more valid to do than if it is merely a
> specific guest kernel limitation that might be fixed any day.
> 

Yes, I agree that the socket-to-NUMA-node boundary is strict. However, it
doesn't seem sensible to split the CPUs in one cluster, or the CPUs in one
core, across different NUMA nodes in a bare-metal environment. I think we
probably need to prevent these two cases, meaning two clusters in one socket
would still be allowed to be associated with different NUMA nodes. A rough
sketch of the kind of check I have in mind follows below.
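
Purely to illustrate the shape of that check (the names and data layout here
are hypothetical and self-contained for the example, not what the series
actually implements), something along these lines would reject a cluster
that spans NUMA nodes; the same loop keyed on core IDs would enforce the
per-core boundary:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, self-contained sketch: one entry per present CPU,
     * recording which cluster it belongs to (assumed globally unique
     * across sockets) and which NUMA node the user assigned it to on
     * the command line.
     */
    typedef struct {
        int cluster_id;
        int node_id;
    } CpuTopoEntry;

    static bool validate_cluster_numa_boundary(const CpuTopoEntry *cpus, int n)
    {
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (cpus[i].cluster_id == cpus[j].cluster_id &&
                    cpus[i].node_id != cpus[j].node_id) {
                    fprintf(stderr,
                            "CPUs in cluster %d span NUMA nodes %d and %d\n",
                            cpus[i].cluster_id,
                            cpus[i].node_id, cpus[j].node_id);
                    return false;
                }
            }
        }
        return true;
    }

With the configuration from the cover letter (clusters=1, cores=3), cluster 0
holds CPUs 0-2 but they are spread over nodes 0 and 1, so it would be
rejected; the per-cluster configuration above would pass.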

I failed to find accurate information about the relationship among
socket/cluster/core in the specs. As I understand it, the CPUs (threads) in
one core share the L2 cache and the cores in one cluster share the L3 cache,
while each thread has its own L1 cache. The L3 cache usually corresponds to a
NUMA node. I may be totally wrong here.

Thanks,
Gavin