From: Igor Mammedov <imammedo@redhat.com>
To: Gavin Shan <gshan@redhat.com>
Cc: qemu-arm@nongnu.org, qemu-devel@nongnu.org, mst@redhat.com,
ani@anisinha.ca, drjones@redhat.com, wangyanan55@huawei.com,
Jonathan.Cameron@Huawei.com, peter.maydell@linaro.org,
berrange@redhat.com, thuth@redhat.com, eduardo@habkost.net,
lvivier@redhat.com, zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: Re: [PATCH 2/3] hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT
Date: Thu, 26 May 2022 14:25:12 +0200 [thread overview]
Message-ID: <20220526142512.32129b2e@redhat.com> (raw)
In-Reply-To: <20220518092141.1050852-3-gshan@redhat.com>
On Wed, 18 May 2022 17:21:40 +0800
Gavin Shan <gshan@redhat.com> wrote:
> The {socket, cluster, core} IDs detected by the Linux guest don't
> match what has been provided in the PPTT. The 'ACPI Processor ID
> valid' flag is missing from the {socket, cluster, core}
> nodes.
To permit this flag to be set on non-leaf nodes, we have to build
corresponding containers for them in the DSDT so that the
'ACPI Processor ID' can be matched with the containers' '_UID's.
If we do not build such containers, then setting this flag is
not correct. And I don't recall QEMU building a CPU hierarchy
in the DSDT.
> In this case, the Linux guest takes the offset between the
> node and the PPTT header as the corresponding IDs, as the following
> logs show.
Perhaps it's the kernel that should be fixed to correctly handle
the case where 'ACPI Processor ID valid' is not set.
>
> /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
> -accel kvm -machine virt,gic-version=host -cpu host \
> -smp 16,sockets=2,clusters=2,cores=2,threads=2
> :
>
> # cd /sys/devices/system/cpu
> # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
> 36 36 36 36 36 36 36 36
> 336 336 336 336 336 336 336 336
> # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
> 56 56 56 56 196 196 196 196
> 356 356 356 356 496 496 496 496
> # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
> 76 76 136 136 216 216 276 276
> 376 376 436 436 516 516 576 576
>
> Fix the issue by setting the 'ACPI Processor ID valid' flag for
> {socket, cluster, core} nodes. With this applied, the IDs are exactly
> what has been provided in the PPTT.
>
> # for i in `seq 0 15`; do cat cpu$i/topology/physical_package_id; done
> 0 0 0 0 0 0 0 0
> 1 1 1 1 1 1 1 1
> # for i in `seq 0 15`; do cat cpu$i/topology/cluster_id; done
> 0 0 0 0 1 1 1 1
> 0 0 0 0 1 1 1 1
> # for i in `seq 0 15`; do cat cpu$i/topology/core_id; done
> 0 0 1 1 0 0 1 1
> 0 0 1 1 0 0 1 1
>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
> hw/acpi/aml-build.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index e6bfac95c7..89f191fd3b 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2026,7 +2026,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> core_id = -1;
> socket_offset = table_data->len - pptt_start;
> build_processor_hierarchy_node(table_data,
> - (1 << 0), /* Physical package */
> + (1 << 0) | /* Physical package */
> + (1 << 1), /* ACPI Processor ID valid */
> 0, socket_id, NULL, 0);
> }
>
> @@ -2037,7 +2038,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> core_id = -1;
> cluster_offset = table_data->len - pptt_start;
> build_processor_hierarchy_node(table_data,
> - (0 << 0), /* Not a physical package */
> + (0 << 0) | /* Not a physical package */
> + (1 << 1), /* ACPI Processor ID valid */
> socket_offset, cluster_id, NULL, 0);
> }
> } else {
> @@ -2055,7 +2057,8 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> core_id = cpus->cpus[n].props.core_id;
> core_offset = table_data->len - pptt_start;
> build_processor_hierarchy_node(table_data,
> - (0 << 0), /* Not a physical package */
> + (0 << 0) | /* Not a physical package */
> + (1 << 1), /* ACPI Processor ID valid */
> cluster_offset, core_id, NULL, 0);
> }
>
Thread overview: 12+ messages
2022-05-18 9:21 [PATCH 0/3] hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT Gavin Shan
2022-05-18 9:21 ` [PATCH 1/3] tests/acpi/virt: Allow PPTT ACPI table changes Gavin Shan
2022-05-18 9:21 ` [PATCH 2/3] hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT Gavin Shan
2022-05-26 12:25 ` Igor Mammedov [this message]
2022-05-26 14:40 ` Gavin Shan
2022-06-09 16:00 ` Igor Mammedov
2022-06-13 9:11 ` Gavin Shan
2022-05-18 9:21 ` [PATCH 3/3] tests/acpi/virt: Update PPTT ACPI table Gavin Shan
2022-05-18 15:42 ` [PATCH 0/3] hw/acpi/aml-build: Fix {socket, cluster, core} IDs in PPTT Andrew Jones
2022-05-26 11:37 ` Gavin Shan
2022-05-26 12:27 ` Igor Mammedov
2022-05-26 14:41 ` Gavin Shan