From: Greg Kurz <groug@kaod.org>
To: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: clg@kaod.org, qemu-ppc@nongnu.org, qemu-devel@nongnu.org,
	david@gibson.dropbear.id.au
Subject: Re: [PATCH 3/3] spapr_numa.c: fix ibm,max-associativity-domains calculation
Date: Thu, 28 Jan 2021 17:21:43 +0100
Message-ID: <20210128172143.5cf56101@bahia.lan>
In-Reply-To: <20210128151731.1333664-4-danielhb413@gmail.com>

On Thu, 28 Jan 2021 12:17:31 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The current logic for calculating 'maxdomain' makes it the sum of
> numa_state->num_nodes and spapr->gpu_numa_id. spapr->gpu_numa_id is
> used as an index to determine the next available NUMA id that a
> given NVGPU can use.
> 
> The problem is that the initial value of gpu_numa_id, for any topology
> that has more than one NUMA node, is equal to numa_state->num_nodes.
> This means that our maxdomain will always be at least twice the
> number of existing NUMA nodes. For example, a guest with 4 NUMA
> nodes ends up with the following max-associativity-domains:
> 
> rtas/ibm,max-associativity-domains
>                  00000004 00000008 00000008 00000008 00000008
> 
> This overtuning of maxdomains doesn't go unnoticed in the guest, where
> SLUB picks it up during boot:
> 
>  dmesg | grep SLUB
> [    0.000000] SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=4, Nodes=8
> 
> SLUB detects 8 total nodes, of which only 4 are online.
> 
> This patch fixes ibm,max-associativity-domains by considering the
> number of NVGPU NUMA nodes present in the guest, instead of
> spapr->gpu_numa_id.
> 
> Reported-by: Cédric Le Goater <clg@kaod.org>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index f71105c783..f4d6abce87 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -60,6 +60,19 @@ unsigned int spapr_numa_initial_nvgpu_NUMA_id(MachineState *machine)
>      return MAX(1, machine->numa_state->num_nodes);
>  }
>  
> +/*
> + * Note: if called before spapr_phb_pci_collect_nvgpu() finishes collecting
> + * all NVGPUs, this function will not give the right number of NVGPU NUMA
> + * nodes.
> + */

This helper has exactly one user: spapr_numa_write_rtas_dt(). Maybe just
open-code it there, with a comment that spapr->gpu_numa_id is assumed to
be correct at the time we populate the device tree?
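
Something along these lines, maybe? Untested sketch, assuming `ms' is
the MachineState local already present in spapr_numa_write_rtas_dt()
and reusing spapr_numa_initial_nvgpu_NUMA_id() from patch 2/3:

    /*
     * spapr->gpu_numa_id is the initial NVGPU NUMA id plus the number
     * of NVGPU NUMA nodes collected so far. It is assumed to have
     * reached its final value by the time we populate the device
     * tree, i.e. spapr_phb_pci_collect_nvgpu() has seen all NVGPUs.
     */
    uint32_t number_nvgpus_nodes = spapr->gpu_numa_id -
                                   spapr_numa_initial_nvgpu_NUMA_id(ms);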

> +static
> +unsigned int spapr_numa_get_number_nvgpus_nodes(SpaprMachineState *spapr)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +
> +    return spapr->gpu_numa_id - spapr_numa_initial_nvgpu_NUMA_id(ms);
> +}
> +
>  /*
>   * This function will translate the user distances into
>   * what the kernel understand as possible values: 10
> @@ -311,6 +324,7 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>  {
>      MachineState *ms = MACHINE(spapr);
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> +    uint32_t number_nvgpus_nodes = spapr_numa_get_number_nvgpus_nodes(spapr);
>      uint32_t refpoints[] = {
>          cpu_to_be32(0x4),
>          cpu_to_be32(0x3),
> @@ -318,7 +332,7 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>          cpu_to_be32(0x1),
>      };
>      uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
> -    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
> +    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;
>      uint32_t maxdomains[] = {
>          cpu_to_be32(4),
>          cpu_to_be32(maxdomain),



Thread overview: 12+ messages
2021-01-28 15:17 [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains Daniel Henrique Barboza
2021-01-28 15:17 ` [PATCH 1/3] spapr: move spapr_machine_using_legacy_numa() to spapr_numa.c Daniel Henrique Barboza
2021-01-28 16:03   ` Greg Kurz
2021-01-28 23:56   ` David Gibson
2021-01-28 15:17 ` [PATCH 2/3] spapr_numa.c: create spapr_numa_initial_nvgpu_NUMA_id() helper Daniel Henrique Barboza
2021-01-28 15:50   ` Greg Kurz
2021-01-28 15:17 ` [PATCH 3/3] spapr_numa.c: fix ibm,max-associativity-domains calculation Daniel Henrique Barboza
2021-01-28 16:21   ` Greg Kurz [this message]
2021-01-28 16:03 ` [PATCH 0/3] spapr, spapr_numa: fix max-associativity-domains Greg Kurz
2021-01-28 17:05   ` Daniel Henrique Barboza
2021-01-28 17:13     ` Cédric Le Goater
2021-01-28 17:20       ` Cédric Le Goater
