From: Gavin Shan <gshan@redhat.com>
To: "Philippe Mathieu-Daudé" <philmd@linaro.org>, qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, peter.maydell@linaro.org, yihyu@redhat.com,
shan.gavin@gmail.com
Subject: Re: [PATCH] hw/arm/virt: Prevent CPUs in one socket from spanning multiple NUMA nodes
Date: Tue, 21 Feb 2023 20:21:02 +1100
Message-ID: <3e88a2ec-6425-f484-8483-560d511a27ca@redhat.com>
In-Reply-To: <78d887c3-0241-9552-69b2-bd2e9a8fb74b@linaro.org>
On 2/21/23 8:15 PM, Philippe Mathieu-Daudé wrote:
> On 21/2/23 09:53, Gavin Shan wrote:
>> The Linux guest kernel reports a warning when two CPUs in one socket
>> are associated with different NUMA nodes, as reproduced with the
>> following command line:
>>
>> -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>> -numa node,nodeid=0,cpus=0-1,memdev=ram0 \
>> -numa node,nodeid=1,cpus=2-3,memdev=ram1 \
>> -numa node,nodeid=2,cpus=4-5,memdev=ram2 \
>>
>> ------------[ cut here ]------------
>> WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>> Modules linked in:
>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>> pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>> pc : build_sched_domains+0x284/0x910
>> lr : build_sched_domains+0x184/0x910
>> sp : ffff80000804bd50
>> x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>> x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>> x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>> x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>> x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>> x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>> x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>> x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>> x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>> x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>> Call trace:
>> build_sched_domains+0x284/0x910
>> sched_init_domains+0xac/0xe0
>> sched_init_smp+0x48/0xc8
>> kernel_init_freeable+0x140/0x1ac
>> kernel_init+0x28/0x140
>> ret_from_fork+0x10/0x20
>>
>> Fix it by preventing multiple CPUs in one socket from being associated
>> with different NUMA nodes.
>>
>> Reported-by: Yihuang Yu <yihyu@redhat.com>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>> hw/arm/virt.c | 37 +++++++++++++++++++++++++++++++++++++
>> 1 file changed, 37 insertions(+)
>>
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index ac626b3bef..e0af267c77 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
>> return false;
>> }
>> +static bool numa_state_valid(MachineState *ms)
>> +{
>> + MachineClass *mc = MACHINE_GET_CLASS(ms);
>> + NumaState *state = ms->numa_state;
>> + const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
>> + const CPUArchId *cpus = possible_cpus->cpus;
>> + int len = possible_cpus->len, i, j;
>> +
>> + if (!state || state->num_nodes <= 1 || len <= 1) {
>> + return true;
>> + }
>> +
>> + for (i = 0; i < len; i++) {
>> + for (j = i + 1; j < len; j++) {
>> + if (cpus[i].props.has_socket_id &&
>> + cpus[i].props.has_node_id &&
>> + cpus[j].props.has_socket_id &&
>> + cpus[j].props.has_node_id &&
>> + cpus[i].props.socket_id == cpus[j].props.socket_id &&
>> + cpus[i].props.node_id != cpus[j].props.node_id) {
>> +             error_report("CPU-%d and CPU-%d in socket-%" PRId64
>> +                          " have been associated with node-%" PRId64
>> +                          " and node-%" PRId64,
>> +                          i, j, cpus[i].props.socket_id,
>> +                          cpus[i].props.node_id,
>> +                          cpus[j].props.node_id);
>> + return false;
>> + }
>> + }
>> + }
>> +
>> + return true;
>> +}
>> +
>> static void create_randomness(MachineState *ms, const char *node)
>> {
>> struct {
>> @@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
>> exit(1);
>> }
>> + if (!numa_state_valid(machine)) {
>> + exit(1);
>> + }
>
> Why restrict to the virt machine?
>
We tried x86 machines and the virt machine, but the issue isn't reproducible on x86
machines, so I think it's a machine- or architecture-specific issue. However, I believe
RISC-V should have the same issue because linux/drivers/base/arch_topology.c is shared
by ARM64 and RISC-V. x86 doesn't use that driver to populate its CPU topology.
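
FWIW, with the check applied, the reproducer should fail on the QEMU side before the
guest ever boots. As a sketch of the expected behaviour, assuming vCPUs are enumerated
linearly across sockets (so CPUs 0-2 land in socket-0), the first conflicting pair
would be CPU-0 (node-0) and CPU-2 (node-1), giving something like

  qemu-system-aarch64: CPU-0 and CPU-2 in socket-0 have been associated with node-0 and node-1

A socket-aligned assignment like the following (untested sketch, same vCPU count but
two nodes, assuming matching ram0/ram1 memory backends) keeps each socket inside one
node and shouldn't trigger the guest warning:

  -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
  -numa node,nodeid=0,cpus=0-2,memdev=ram0 \
  -numa node,nodeid=1,cpus=3-5,memdev=ram1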
Thanks,
Gavin