From: Greg Kurz <groug@kaod.org>
To: Daniel Henrique Barboza <danielhb413@gmail.com>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, david@gibson.dropbear.id.au
Subject: Re: [PATCH v4 3/3] spapr_numa: use spapr_numa_get_vcpu_assoc() in home_node hcall
Date: Fri, 4 Sep 2020 12:33:02 +0200
Message-ID: <20200904123302.21bd6ed3@bahia.lan>
In-Reply-To: <20200904010439.581957-4-danielhb413@gmail.com>

On Thu,  3 Sep 2020 22:04:39 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The current implementation of h_home_node_associativity hard-codes
> the associativity domain values of the vcpus. Let's make it use the
> values already initialized in spapr->numa_assoc_array instead, via
> the spapr_numa_get_vcpu_assoc() helper.
> 
> We want to set it and forget it, and for that we also need to
> assert that we don't overflow the registers of the hypercall.
> From R4 to R9 we can squeeze in 12 associativity domains, so
> let's assert that MAX_DISTANCE_REF_POINTS isn't greater
> than that.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 33 +++++++++++++++++++++++++--------
>  1 file changed, 25 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 980a6488bf..0a7e07fe60 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -181,10 +181,12 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>                                                target_ulong opcode,
>                                                target_ulong *args)
>  {
> +    g_autofree uint32_t *vcpu_assoc = NULL;
>      target_ulong flags = args[0];
>      target_ulong procno = args[1];
>      PowerPCCPU *tcpu;
> -    int idx;
> +    uint vcpu_assoc_size;
> +    int idx, assoc_idx;
>  
>      /* only support procno from H_REGISTER_VPA */
>      if (flags != 0x1) {
> @@ -196,16 +198,31 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>          return H_P2;
>      }
>  
> -    /* sequence is the same as in the "ibm,associativity" property */
> +    /*
> +     * Given that we want to be flexible with the sizes and indexes,
> +     * we must consider that there is a hard limit on how many
> +     * associativity domains we can fit in R4 up to R9, which
> +     * is 12. Assert and bail if that's not the case.
> +     */
> +    G_STATIC_ASSERT(MAX_DISTANCE_REF_POINTS <= 12);
> +
> +    vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, tcpu, &vcpu_assoc_size);
> +    vcpu_assoc_size /= sizeof(uint32_t);

Using vcpu_assoc_size both as a size in bytes and as a number of elements
in the array is gross... Anyway, this should go away if you introduce
a macro as suggested in the previous patch.
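
Just to illustrate what I mean, a rough sketch (the macro name and the
exact element count below are placeholders, not a definitive proposal):
expose the number of uint32_t elements as a compile-time constant so
callers never deal in bytes at all:

    /*
     * Illustrative only: number of uint32_t elements filled in by
     * spapr_numa_get_vcpu_assoc(), here assumed to be the leading
     * size word plus the associativity domain entries.
     */
    #define VCPU_ASSOC_SIZE (MAX_DISTANCE_REF_POINTS + 1)

Then the hcall can simply compare assoc_idx against VCPU_ASSOC_SIZE
instead of dividing a byte count by sizeof(uint32_t).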

> +    /* assoc_idx starts at 1 to skip associativity size */
> +    assoc_idx = 1;
>  
> -    idx = 0;
>  #define ASSOCIATIVITY(a, b) (((uint64_t)(a) << 32) | \
>                               ((uint64_t)(b) & 0xffffffff))
> -    args[idx++] = ASSOCIATIVITY(0, 0);
> -    args[idx++] = ASSOCIATIVITY(0, tcpu->node_id);
> -    args[idx++] = ASSOCIATIVITY(procno, -1);
> -    for ( ; idx < 6; idx++) {
> -        args[idx] = -1;
> +
> +    for (idx = 0; idx < 6; idx++) {
> +        int32_t a, b;
> +
> +        a = assoc_idx < vcpu_assoc_size ?
> +            be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
> +        b = assoc_idx < vcpu_assoc_size ?
> +            be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
> +
> +        args[idx] = ASSOCIATIVITY(a, b);
>      }

Ouch, this change is really giving me a headache... I understand that
tcpu->node_id and procno are now being read from vcpu_assoc[], but it's
hard to tell what vcpu_assoc[assoc_idx++] points to at each step,
especially with the ternary operators... Honestly, I'd rather keep that
loop unrolled, with comments saying what's being read.
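
To make that concrete, the kind of unrolled version I have in mind is
something like this (the indexes are purely illustrative, I haven't
checked them against what spapr_numa_get_vcpu_assoc() actually fills in):

    /* vcpu_assoc[0] is the length word, the domain entries start at 1 */
    args[0] = ASSOCIATIVITY(be32_to_cpu(vcpu_assoc[1]),
                            be32_to_cpu(vcpu_assoc[2]));
    /* node id and vcpu id entries, as filled in by the helper */
    args[1] = ASSOCIATIVITY(be32_to_cpu(vcpu_assoc[3]),
                            be32_to_cpu(vcpu_assoc[4]));
    /* the remaining return registers are unused */
    for (idx = 2; idx < 6; idx++) {
        args[idx] = ASSOCIATIVITY(-1, -1);
    }

That way it's immediately obvious what ends up in each return register.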

>  #undef ASSOCIATIVITY
>  



Thread overview: 10+ messages
2020-09-04  1:04 [PATCH v4 0/3] pseries NUMA distance rework Daniel Henrique Barboza
2020-09-04  1:04 ` [PATCH v4 1/3] spapr: move h_home_node_associativity to spapr_numa.c Daniel Henrique Barboza
2020-09-04  4:09   ` David Gibson
2020-09-04  1:04 ` [PATCH v4 2/3] spapr_numa: create a vcpu associativity helper Daniel Henrique Barboza
2020-09-04  4:10   ` David Gibson
2020-09-04  9:14     ` Daniel Henrique Barboza
2020-09-04 10:02   ` Greg Kurz
2020-09-04 10:19     ` Daniel Henrique Barboza
2020-09-04  1:04 ` [PATCH v4 3/3] spapr_numa: use spapr_numa_get_vcpu_assoc() in home_node hcall Daniel Henrique Barboza
2020-09-04 10:33   ` Greg Kurz [this message]
