From: "Liu, Jingqi" <jingqi.liu@intel.com>
To: "imammedo@redhat.com" <imammedo@redhat.com>,
"mst@redhat.com" <mst@redhat.com>,
"marcel.apfelbaum@gmail.com" <marcel.apfelbaum@gmail.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"ehabkost@redhat.com" <ehabkost@redhat.com>,
"richard.henderson@linaro.org" <richard.henderson@linaro.org>
Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [PATCH v3] hw/i386/acpi-build: Get NUMA information from struct NumaState
Date: Fri, 3 Sep 2021 14:42:19 +0800
Message-ID: <4793c8f3-31ba-9aa6-3ffd-db2ff4c1ea26@intel.com>
In-Reply-To: <20210823011254.28506-1-jingqi.liu@intel.com>
Hi Igor,
Any comments?
Thanks,
Jingqi
On 8/23/2021 9:12 AM, Liu, Jingqi wrote:
> Since commits aa57020774b ("numa: move numa global variable
> nb_numa_nodes into MachineState") and 7e721e7b10e ("numa: move
> numa global variable numa_info into MachineState"), we can get
> NUMA information completely from MachineState::numa_state.
>
> Remove PCMachineState::numa_nodes and PCMachineState::node_mem,
> since they are just copied from MachineState::numa_state.
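>
> For illustration, a hypothetical helper (a minimal sketch, not part of
> this patch) that sums guest RAM across nodes now only needs the
> MachineState:
>
>     static uint64_t total_numa_ram(MachineState *machine)
>     {
>         NodeInfo *numa_info = machine->numa_state->nodes;
>         uint64_t total = 0;
>         int i;
>
>         /* Per-node sizes now come from MachineState::numa_state. */
>         for (i = 0; i < machine->numa_state->num_nodes; i++) {
>             total += numa_info[i].node_mem;
>         }
>         return total;
>     }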
>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Signed-off-by: Jingqi Liu <jingqi.liu@intel.com>
> ---
>  hw/i386/acpi-build.c | 12 +++++++-----
>  hw/i386/pc.c         |  9 ---------
>  include/hw/i386/pc.h |  4 ----
>  3 files changed, 7 insertions(+), 18 deletions(-)
>
> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
> index 17836149fe..e3c9ad011e 100644
> --- a/hw/i386/acpi-build.c
> +++ b/hw/i386/acpi-build.c
> @@ -1902,6 +1902,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
>      X86MachineState *x86ms = X86_MACHINE(machine);
>      const CPUArchIdList *apic_ids = mc->possible_cpu_arch_ids(machine);
>      PCMachineState *pcms = PC_MACHINE(machine);
> +    int nb_numa_nodes = machine->numa_state->num_nodes;
> +    NodeInfo *numa_info = machine->numa_state->nodes;
>      ram_addr_t hotplugabble_address_space_size =
>          object_property_get_int(OBJECT(pcms), PC_MACHINE_DEVMEM_REGION_SIZE,
>                                  NULL);
> @@ -1945,9 +1947,9 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
>      next_base = 0;
>      numa_start = table_data->len;
>
> -    for (i = 1; i < pcms->numa_nodes + 1; ++i) {
> +    for (i = 1; i < nb_numa_nodes + 1; ++i) {
>          mem_base = next_base;
> -        mem_len = pcms->node_mem[i - 1];
> +        mem_len = numa_info[i - 1].node_mem;
>          next_base = mem_base + mem_len;
>
>          /* Cut out the 640K hole */
> @@ -1995,7 +1997,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
>      }
>
>      slots = (table_data->len - numa_start) / sizeof *numamem;
> -    for (; slots < pcms->numa_nodes + 2; slots++) {
> +    for (; slots < nb_numa_nodes + 2; slots++) {
>          numamem = acpi_data_push(table_data, sizeof *numamem);
>          build_srat_memory(numamem, 0, 0, 0, MEM_AFFINITY_NOFLAGS);
>      }
> @@ -2011,7 +2013,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, MachineState *machine)
>      if (hotplugabble_address_space_size) {
>          numamem = acpi_data_push(table_data, sizeof *numamem);
>          build_srat_memory(numamem, machine->device_memory->base,
> -                          hotplugabble_address_space_size, pcms->numa_nodes - 1,
> +                          hotplugabble_address_space_size, nb_numa_nodes - 1,
>                            MEM_AFFINITY_HOTPLUGGABLE | MEM_AFFINITY_ENABLED);
>      }
>
> @@ -2513,7 +2515,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
>          }
>      }
>  #endif
> -    if (pcms->numa_nodes) {
> +    if (machine->numa_state->num_nodes) {
>          acpi_add_table(table_offsets, tables_blob);
>          build_srat(tables_blob, tables->linker, machine);
>          if (machine->numa_state->have_numa_distance) {
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index c2b9d62a35..adbc348488 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -800,18 +800,9 @@ void pc_machine_done(Notifier *notifier, void *data)
>
>  void pc_guest_info_init(PCMachineState *pcms)
>  {
> -    int i;
> -    MachineState *ms = MACHINE(pcms);
>      X86MachineState *x86ms = X86_MACHINE(pcms);
>
>      x86ms->apic_xrupt_override = true;
> -    pcms->numa_nodes = ms->numa_state->num_nodes;
> -    pcms->node_mem = g_malloc0(pcms->numa_nodes *
> -                               sizeof *pcms->node_mem);
> -    for (i = 0; i < ms->numa_state->num_nodes; i++) {
> -        pcms->node_mem[i] = ms->numa_state->nodes[i].node_mem;
> -    }
> -
>      pcms->machine_done.notify = pc_machine_done;
>      qemu_add_machine_init_done_notifier(&pcms->machine_done);
>  }
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 88dffe7517..31b334e0a4 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -47,10 +47,6 @@ typedef struct PCMachineState {
>      bool default_bus_bypass_iommu;
>      uint64_t max_fw_size;
>
> -    /* NUMA information: */
> -    uint64_t numa_nodes;
> -    uint64_t *node_mem;
> -
>      /* ACPI Memory hotplug IO base address */
>      hwaddr memhp_io_base;
>  } PCMachineState;
>