From: Igor Mammedov <imammedo@redhat.com>
To: Gavin Shan <gshan@redhat.com>
Cc: lvivier@redhat.com, eduardo@habkost.net, thuth@redhat.com,
berrange@redhat.com, shan.gavin@gmail.com,
peter.maydell@linaro.org, zhenyzha@redhat.com, mst@redhat.com,
qemu-devel@nongnu.org, armbru@redhat.com, wangyanan55@huawei.com,
qemu-arm@nongnu.org, marcel.apfelbaum@gmail.com,
Jonathan.Cameron@Huawei.com, ani@anisinha.ca,
pbonzini@redhat.com, drjones@redhat.com, eblake@redhat.com,
f4bug@amsat.org
Subject: Re: [PATCH v7 4/4] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
Date: Wed, 20 Apr 2022 16:56:55 +0200
Message-ID: <20220420165655.5e83b381@redhat.com>
In-Reply-To: <20220420104909.233058-5-gshan@redhat.com>
On Wed, 20 Apr 2022 18:49:09 +0800
Gavin Shan <gshan@redhat.com> wrote:
> When the PPTT table is built, the CPU topology is re-calculated, but
> it's unnecessary because the CPU topology has already been populated in
> virt_possible_cpu_arch_ids() on the arm/virt machine.
>
> This reworks build_pptt() to avoid the re-calculation by reusing the
> existing IDs in ms->possible_cpus. Currently, the only user of
> build_pptt() is the arm/virt machine.
>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
Looks fine to me, so
Acked-by: Igor Mammedov <imammedo@redhat.com>
Also, do we have a bios-tables test that watches over/tests the PPTT table?
If not, please add one as a patch on top.
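For reference, something along these lines in tests/qtest/bios-tables-test.c
would do it, reusing that file's existing test_data/test_acpi_one() helpers;
the test name, the ".topology" variant suffix and the -smp values below are
only illustrative, not a claim about what the follow-up patch must contain:

static void test_acpi_virt_tcg_topology(void)
{
    test_data data = {
        .machine = "virt",
        .variant = ".topology",  /* expects blobs like tests/data/acpi/virt/PPTT.topology */
        .tcg_only = true,
        .uefi_fl1 = "pc-bios/edk2-aarch64-code.fd",
        .uefi_fl2 = "pc-bios/edk2-arm-vars.fd",
        .cd = "tests/data/uefi-boot-images/bios-tables-test.aarch64.iso.qcow2",
        .ram_start = 0x40000000ULL,
        .scan_len = 128ULL * 1024 * 1024,
    };

    /* pick a topology that exercises socket, cluster, core and thread nodes */
    test_acpi_one("-cpu cortex-a57 "
                  "-smp sockets=1,clusters=2,cores=2,threads=2", &data);
    free_test_data(&data);
}

plus a qtest_add_func("acpi/virt/topology", ...) registration for aarch64 in
main(), and the usual dance of listing the new tables in
tests/qtest/bios-tables-test-allowed-diff.h first, then regenerating the
expected blobs with tests/data/acpi/rebuild-expected-aml.sh.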
> ---
> hw/acpi/aml-build.c | 109 +++++++++++++++++++-------------------------
> 1 file changed, 47 insertions(+), 62 deletions(-)
>
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 4086879ebf..73f4e69c29 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2002,86 +2002,71 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
> const char *oem_id, const char *oem_table_id)
> {
> MachineClass *mc = MACHINE_GET_CLASS(ms);
> - GQueue *list = g_queue_new();
> - guint pptt_start = table_data->len;
> - guint parent_offset;
> - guint length, i;
> - int uid = 0;
> - int socket;
> + CPUArchIdList *cpus = ms->possible_cpus;
> + int64_t socket_id = -1, cluster_id = -1, core_id = -1;
> + uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
> + uint32_t pptt_start = table_data->len;
> + int n;
> AcpiTable table = { .sig = "PPTT", .rev = 2,
> .oem_id = oem_id, .oem_table_id = oem_table_id };
>
> acpi_table_begin(&table, table_data);
>
> - for (socket = 0; socket < ms->smp.sockets; socket++) {
> - g_queue_push_tail(list,
> - GUINT_TO_POINTER(table_data->len - pptt_start));
> - build_processor_hierarchy_node(
> - table_data,
> - /*
> - * Physical package - represents the boundary
> - * of a physical package
> - */
> - (1 << 0),
> - 0, socket, NULL, 0);
> - }
> + /*
> + * This works with the assumption that cpus[n].props.*_id has been
> + * sorted from top to down levels in mc->possible_cpu_arch_ids().
> + * Otherwise, the unexpected and duplicate containers will be
> + * created.
> + */
> + for (n = 0; n < cpus->len; n++) {
> + if (cpus->cpus[n].props.socket_id != socket_id) {
> + assert(cpus->cpus[n].props.socket_id > socket_id);
> + socket_id = cpus->cpus[n].props.socket_id;
> + cluster_id = -1;
> + core_id = -1;
> + socket_offset = table_data->len - pptt_start;
> + build_processor_hierarchy_node(table_data,
> + (1 << 0), /* Physical package */
> + 0, socket_id, NULL, 0);
> + }
>
> - if (mc->smp_props.clusters_supported) {
> - length = g_queue_get_length(list);
> - for (i = 0; i < length; i++) {
> - int cluster;
> -
> - parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> - for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
> - g_queue_push_tail(list,
> - GUINT_TO_POINTER(table_data->len - pptt_start));
> - build_processor_hierarchy_node(
> - table_data,
> - (0 << 0), /* not a physical package */
> - parent_offset, cluster, NULL, 0);
> + if (mc->smp_props.clusters_supported) {
> + if (cpus->cpus[n].props.cluster_id != cluster_id) {
> + assert(cpus->cpus[n].props.cluster_id > cluster_id);
> + cluster_id = cpus->cpus[n].props.cluster_id;
> + core_id = -1;
> + cluster_offset = table_data->len - pptt_start;
> + build_processor_hierarchy_node(table_data,
> + (0 << 0), /* Not a physical package */
> + socket_offset, cluster_id, NULL, 0);
> }
> + } else {
> + cluster_offset = socket_offset;
> }
> - }
>
> - length = g_queue_get_length(list);
> - for (i = 0; i < length; i++) {
> - int core;
> -
> - parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> - for (core = 0; core < ms->smp.cores; core++) {
> - if (ms->smp.threads > 1) {
> - g_queue_push_tail(list,
> - GUINT_TO_POINTER(table_data->len - pptt_start));
> - build_processor_hierarchy_node(
> - table_data,
> + if (ms->smp.threads == 1) {
> + build_processor_hierarchy_node(table_data,
> + (1 << 1) | /* ACPI Processor ID valid */
> + (1 << 3), /* Node is a Leaf */
> + cluster_offset, n, NULL, 0);
> + } else {
> + if (cpus->cpus[n].props.core_id != core_id) {
> + assert(cpus->cpus[n].props.core_id > core_id);
> + core_id = cpus->cpus[n].props.core_id;
> + core_offset = table_data->len - pptt_start;
> + build_processor_hierarchy_node(table_data,
> (0 << 0), /* not a physical package */
> - parent_offset, core, NULL, 0);
> - } else {
> - build_processor_hierarchy_node(
> - table_data,
> - (1 << 1) | /* ACPI Processor ID valid */
> - (1 << 3), /* Node is a Leaf */
> - parent_offset, uid++, NULL, 0);
> + cluster_offset, core_id, NULL, 0);
> }
> - }
> - }
> -
> - length = g_queue_get_length(list);
> - for (i = 0; i < length; i++) {
> - int thread;
>
> - parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> - for (thread = 0; thread < ms->smp.threads; thread++) {
> - build_processor_hierarchy_node(
> - table_data,
> + build_processor_hierarchy_node(table_data,
> (1 << 1) | /* ACPI Processor ID valid */
> (1 << 2) | /* Processor is a Thread */
> (1 << 3), /* Node is a Leaf */
> - parent_offset, uid++, NULL, 0);
> + core_offset, n, NULL, 0);
> }
> }
>
> - g_queue_free(list);
> acpi_table_end(linker, &table);
> }
>
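FWIW, the ordering assumption documented by the new comment above can be
illustrated in isolation: if the *_id values are derived from the linear CPU
index the way this series makes virt_possible_cpu_arch_ids() derive them
(the formulas below are my reading of patch 2/4, so treat them as an
assumption), walking possible_cpus[] in index order only ever sees the IDs
grow between resets, so each container node is emitted exactly once:

/* standalone demo, not QEMU code */
#include <stdio.h>

int main(void)
{
    /* example -smp sockets=2,clusters=2,cores=2,threads=2 */
    const int sockets = 2, clusters = 2, cores = 2, threads = 2;
    const int max_cpus = sockets * clusters * cores * threads;

    for (int n = 0; n < max_cpus; n++) {
        int socket_id  = n / (clusters * cores * threads);
        int cluster_id = (n / (cores * threads)) % clusters;
        int core_id    = (n / threads) % cores;
        int thread_id  = n % threads;

        printf("cpu %2d: socket %d cluster %d core %d thread %d\n",
               n, socket_id, cluster_id, core_id, thread_id);
    }
    return 0;
}

The printed table shows socket_id never decreases, and cluster_id/core_id
never decrease within their parent before the code resets them to -1, which
is exactly what the asserts in the new build_pptt() expect.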