qemu-devel.nongnu.org archive mirror
From: Tao Xu <tao3.xu@intel.com>
To: "ehabkost@redhat.com" <ehabkost@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>,
	"qemu-ppc@nongnu.org" <qemu-ppc@nongnu.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"david@gibson.dropbear.id.au" <david@gibson.dropbear.id.au>
Subject: Re: [PATCH v2] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node
Date: Tue, 24 Sep 2019 11:34:40 +0800	[thread overview]
Message-ID: <e84bd22f-fce4-ea39-38a0-0933ff30237b@intel.com> (raw)
In-Reply-To: <20190905083238.1799-1-tao3.xu@intel.com>

Hi Eduardo,

What do you think of this version of the patch? The previous version was 
reverted from a pull request last month, and I then submitted this one.
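
For reference, opting in works the same way as the spapr hunk in the patch
below; a minimal sketch for another machine type (hypothetical
my_machine_class_init, not part of this patch, assuming the usual
"hw/boards.h" include) would be:

    static void my_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        /* Ask numa_complete_configuration() to create one implicit
         * NUMA node covering all of RAM when the user passes no
         * -numa options on the command line. */
        mc->auto_enable_numa = true;
    }

With this set, numa_complete_configuration() ends up with node 0 holding
node_mem = ram_size, which is what spapr_populate_memory() used to do by
hand before this patch.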

Tao

On 9/5/2019 4:32 PM, Xu, Tao3 wrote:
> Add MachineClass::auto_enable_numa field. When it is true, a NUMA node
> is expected to be created implicitly.
> 
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
> Reviewed-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: Tao Xu <tao3.xu@intel.com>
> ---
> 
> Note: the -numa node,mem parameter is deprecated as well, so I set
> "numa_info[0].node_mem = ram_size" instead of
> "NumaNodeOptions node = { .mem = ram_size }".
> 
> Changes in v2:
>      - Fix the qtest error, avoid using numa_auto_assign_ram.
> ---
>   hw/core/numa.c      | 10 ++++++++--
>   hw/ppc/spapr.c      |  9 +--------
>   include/hw/boards.h |  1 +
>   3 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index 4dfec5c95b..038c96d4ab 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -378,11 +378,17 @@ void numa_complete_configuration(MachineState *ms)
>        *   guest tries to use it with that drivers.
>        *
>        * Enable NUMA implicitly by adding a new NUMA node automatically.
> +     *
> +     * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
> +     * assume there is just one node with whole RAM.
>        */
> -    if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
> -        mc->auto_enable_numa_with_memhp) {
> +    if (ms->numa_state->num_nodes == 0 &&
> +        ((ms->ram_slots > 0 &&
> +        mc->auto_enable_numa_with_memhp) ||
> +        mc->auto_enable_numa)) {
>               NumaNodeOptions node = { };
>               parse_numa_node(ms, &node, &error_abort);
> +            numa_info[0].node_mem = ram_size;
>       }
>   
>       assert(max_numa_nodeid <= MAX_NODES);
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 222a325056..f760e0f5d7 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -405,14 +405,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
>       hwaddr mem_start, node_size;
>       int i, nb_nodes = machine->numa_state->num_nodes;
>       NodeInfo *nodes = machine->numa_state->nodes;
> -    NodeInfo ramnode;
> -
> -    /* No NUMA nodes, assume there is just one node with whole RAM */
> -    if (!nb_nodes) {
> -        nb_nodes = 1;
> -        ramnode.node_mem = machine->ram_size;
> -        nodes = &ramnode;
> -    }
>   
>       for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
>           if (!nodes[i].node_mem) {
> @@ -4477,6 +4469,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
>        */
>       mc->numa_mem_align_shift = 28;
>       mc->numa_mem_supported = true;
> +    mc->auto_enable_numa = true;
>   
>       smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
>       smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 2289536e48..481e69388e 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -221,6 +221,7 @@ struct MachineClass {
>       bool smbus_no_migration_support;
>       bool nvdimm_supported;
>       bool numa_mem_supported;
> +    bool auto_enable_numa;
>   
>       HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
>                                              DeviceState *dev);
> 




Thread overview: 4+ messages
2019-09-05  8:32 [Qemu-devel] [PATCH v2] numa: Introduce MachineClass::auto_enable_numa for implicit NUMA node Tao Xu
2019-09-16  2:03 ` Tao Xu
2019-09-24  3:34 ` Tao Xu [this message]
2019-09-26 17:16   ` Eduardo Habkost
