From: Gavin Shan <gshan@redhat.com>
To: "Philippe Mathieu-Daudé" <philmd@linaro.org>, qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, peter.maydell@linaro.org, yihyu@redhat.com,
shan.gavin@gmail.com, Igor Mammedov <imammedo@redhat.com>
Subject: Re: [PATCH] hw/arm/virt: Prevent CPUs in one socket to span mutiple NUMA nodes
Date: Wed, 22 Feb 2023 10:38:56 +1100 [thread overview]
Message-ID: <5cade5f0-edbd-5d9a-c6d5-e2cb2f73b3b5@redhat.com> (raw)
In-Reply-To: <d49eb549-9a8f-ad31-9836-cfd7dac78be6@linaro.org>
On 2/22/23 10:31 AM, Philippe Mathieu-Daudé wrote:
> On 22/2/23 00:12, Gavin Shan wrote:
>> On 2/21/23 9:21 PM, Philippe Mathieu-Daudé wrote:
>>> On 21/2/23 10:21, Gavin Shan wrote:
>>>> On 2/21/23 8:15 PM, Philippe Mathieu-Daudé wrote:
>>>>> On 21/2/23 09:53, Gavin Shan wrote:
>>>>>> The Linux guest kernel reports a warning when two CPUs in one socket
>>>>>> are associated with different NUMA nodes, as with the following
>>>>>> command lines.
>>>>>>
>>>>>> -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>>>>>> -numa node,nodeid=0,cpus=0-1,memdev=ram0 \
>>>>>> -numa node,nodeid=1,cpus=2-3,memdev=ram1 \
>>>>>> -numa node,nodeid=2,cpus=4-5,memdev=ram2 \
>>>>>>
>>>>>> ------------[ cut here ]------------
>>>>>> WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>>>>>> Modules linked in:
>>>>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>>>>>> pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>>>>> pc : build_sched_domains+0x284/0x910
>>>>>> lr : build_sched_domains+0x184/0x910
>>>>>> sp : ffff80000804bd50
>>>>>> x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>>>>>> x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>>>>>> x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>>>>>> x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>>>>>> x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>>>>>> x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>>>>>> x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>>>>>> x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>>>>>> x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>>>>>> x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>>>>>> Call trace:
>>>>>> build_sched_domains+0x284/0x910
>>>>>> sched_init_domains+0xac/0xe0
>>>>>> sched_init_smp+0x48/0xc8
>>>>>> kernel_init_freeable+0x140/0x1ac
>>>>>> kernel_init+0x28/0x140
>>>>>> ret_from_fork+0x10/0x20
>>>>>>
>>>>>> Fix it by preventing multiple CPUs in one socket from being associated
>>>>>> with different NUMA nodes.
>>>>>>
>>>>>> Reported-by: Yihuang Yu <yihyu@redhat.com>
>>>>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>>>>> ---
>>>>>> hw/arm/virt.c | 37 +++++++++++++++++++++++++++++++++++++
>>>>>> 1 file changed, 37 insertions(+)
>>>>>>
>>>>>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>>>>>> index ac626b3bef..e0af267c77 100644
>>>>>> --- a/hw/arm/virt.c
>>>>>> +++ b/hw/arm/virt.c
>>>>>> @@ -230,6 +230,39 @@ static bool cpu_type_valid(const char *cpu)
>>>>>>      return false;
>>>>>>  }
>>>>>>
>>>>>> +static bool numa_state_valid(MachineState *ms)
>>>>>> +{
>>>>>> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
>>>>>> +    NumaState *state = ms->numa_state;
>>>>>> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
>>>>>> +    const CPUArchId *cpus = possible_cpus->cpus;
>>>>>> +    int len = possible_cpus->len, i, j;
>>>>>> +
>>>>>> +    if (!state || state->num_nodes <= 1 || len <= 1) {
>>>>>> +        return true;
>>>>>> +    }
>>>>>> +
>>>>>> +    for (i = 0; i < len; i++) {
>>>>>> +        for (j = i + 1; j < len; j++) {
>>>>>> +            if (cpus[i].props.has_socket_id &&
>>>>>> +                cpus[i].props.has_node_id &&
>>>>>> +                cpus[j].props.has_socket_id &&
>>>>>> +                cpus[j].props.has_node_id &&
>>>>>> +                cpus[i].props.socket_id == cpus[j].props.socket_id &&
>>>>>> +                cpus[i].props.node_id != cpus[j].props.node_id) {
>>>>>> +                error_report("CPU-%d and CPU-%d in socket-%ld have been "
>>>>>> +                             "associated with node-%ld and node-%ld",
>>>>>> +                             i, j, cpus[i].props.socket_id,
>>>>>> +                             cpus[i].props.node_id,
>>>>>> +                             cpus[j].props.node_id);
>>>>>> +                return false;
>>>>>> +            }
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    return true;
>>>>>> +}
>>>>>> +
>>>>>>  static void create_randomness(MachineState *ms, const char *node)
>>>>>>  {
>>>>>>      struct {
>>>>>> @@ -2040,6 +2073,10 @@ static void machvirt_init(MachineState *machine)
>>>>>>          exit(1);
>>>>>>      }
>>>>>>
>>>>>> +    if (!numa_state_valid(machine)) {
>>>>>> +        exit(1);
>>>>>> +    }
>>>>>
>>>>> Why restrict to the virt machine?
>>>>>
>>>>
>>>> We tried x86 machines and the virt machine, but the issue isn't reproducible on x86 machines.
>>>> So I think it's a machine- or architecture-specific issue. However, I believe RISC-V should
>>>> have a similar issue because linux/drivers/base/arch_topology.c is shared by ARM64 and RISC-V.
>>>> x86 doesn't use that driver to populate its CPU topology.
>>>
>>> Oh, I haven't thought about the other archs, I meant this seem a generic
>>> issue which affects all (ARM) machines, so why restrict to the (ARM)
>>> virt machine?
>>>
>>
>> [Ccing Igor for comments]
>>
>> Well, the virt machine is the only concern for us for now. You're right that all ARM64 and ARM
>> machines need this check and limitation, so the check needs to be done in a generic path. The
>> best way I can figure out is something like below: introduce a switch in 'struct NumaState'
>> and do the check in the generic path. The switch is turned on by individual machines. Please
>> let me know if you have better ideas.
>
> Can't this be done generically in machine_numa_finish_cpu_init()
> -> numa_validate_initiator()?
>
Yes, machine_numa_finish_cpu_init() is a better place, before numa_validate_initiator().
>> - Add 'bool has_strict_socket_mapping' to 'struct NumaState', which is 'false' by default until
>>   a machine-specific initialization function calls the helper set_numa_strict_socket_mapping(),
>>   for example in hw/arm/virt.c::virt_instance_init().
>>
>> - In numa_complete_configuration(), do the check to make sure no socket crosses a NUMA node
>>   boundary if 'has_strict_socket_mapping' is true.
>>
Thanks,
Gavin
Thread overview: 7+ messages
2023-02-21 8:53 [PATCH] hw/arm/virt: Prevent CPUs in one socket to span mutiple NUMA nodes Gavin Shan
2023-02-21 9:15 ` Philippe Mathieu-Daudé
2023-02-21 9:21 ` Gavin Shan
2023-02-21 10:21 ` Philippe Mathieu-Daudé
2023-02-21 23:12 ` Gavin Shan
2023-02-21 23:31 ` Philippe Mathieu-Daudé
2023-02-21 23:38 ` Gavin Shan [this message]