From: Daniel Henrique Barboza <danielhb413@gmail.com>
To: Greg Kurz <groug@kaod.org>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, david@gibson.dropbear.id.au
Subject: Re: [PATCH v6 3/6] spapr: introduce spapr_numa_associativity_reset()
Date: Wed, 15 Sep 2021 22:32:13 -0300 [thread overview]
Message-ID: <0dc516f6-8504-6d65-93f7-c8cd0de3afa2@gmail.com> (raw)
In-Reply-To: <3bd59a2f-5c3b-f062-4a6c-abf34340000d@gmail.com>
Greg,
On 9/14/21 16:58, Daniel Henrique Barboza wrote:
>
>
> On 9/14/21 08:55, Greg Kurz wrote:
>> On Fri, 10 Sep 2021 16:55:36 -0300
>> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
>>
[...]
>>> }
>>> @@ -167,6 +167,11 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>>> int nb_numa_nodes = machine->numa_state->num_nodes;
>>> int i, j, max_nodes_with_gpus;
>>> + /* init FORM1_assoc_array */
>>> + for (i = 0; i < MAX_NODES + NVGPU_MAX_NUM; i++) {
>>> + spapr->FORM1_assoc_array[i] = g_new0(uint32_t, NUMA_ASSOC_SIZE);
>>
>> Why dynamic allocation since you have fixed size ?
>
> If I use static allocation, the C compiler complains that a
> uint32_t[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE] array can't be
> assigned to a uint32_t** pointer.
>
> And given that the FORM2 array is a [MAX_NODES + NVGPU_MAX_NUM][2] array, the
> way I worked around that here is to use dynamic allocation. C then considers it
> valid to use numa_assoc_array as a uint32_t** pointer for both the FORM1 and
> FORM2 2D arrays. I suspect there is a way of doing static allocation while
> keeping the uint32_t** pointer as is, but I didn't find one. Tips welcome :D
>
> An alternative I considered, which avoids this dynamic allocation hack, is to
> make the FORM1 and FORM2 data structures the same size (i.e. a
> [MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE] uint32_t array for both), so that
> numa_assoc_array can be a pointer of the same array type for either mode.
> Since we control the FORM1 and FORM2 sizes separately inside the functions,
> this would work. The downside is that the FORM2 2D array would be bigger than
> necessary.
I took a look and didn't find any neat way of doing a pointer switch
between 2 static arrays without either allocating dynamic memory for the
pointer and then g_memdup'ing the chosen array into it, or making all
arrays the same size (i.e. numa_assoc_array would also be a statically
allocated array of FORM1 size) and then memcpy()'ing the chosen mode.
I just posted a new version that no longer relies on 'numa_assoc_array'.
Instead, the DT functions call a get_associativity() helper to retrieve
the current associativity array based on the CAS choice, similar to how
it is already done for the array sizes. This also allowed us to get rid
of associativity_reset().
Thanks,
Daniel
>
>
> I don't have strong opinions about which way to do it, so I'm all ears.
>
>
> Thanks,
>
>
> Daniel
>
>
>
>>
>>> + }
>>> +
>>> /*
>>> * For all associativity arrays: first position is the size,
>>> * position MAX_DISTANCE_REF_POINTS is always the numa_id,
>>> @@ -177,8 +182,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>>> * 'i' will be a valid node_id set by the user.
>>> */
>>> for (i = 0; i < nb_numa_nodes; i++) {
>>> - spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>> - spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>> + spapr->FORM1_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>> + spapr->FORM1_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>> }
>>> /*
>>> @@ -192,15 +197,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>>> max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
>>> for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
>>> - spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>> + spapr->FORM1_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>> for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>>> uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
>>> SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
>>> - spapr->numa_assoc_array[i][j] = gpu_assoc;
>>> + spapr->FORM1_assoc_array[i][j] = gpu_assoc;
>>> }
>>> - spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>> + spapr->FORM1_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>> }
>>> /*
>>> @@ -227,14 +232,33 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>> MachineState *machine)
>>> {
>>> spapr_numa_FORM1_affinity_init(spapr, machine);
>>> +
>>> + /*
>>> + * Default to FORM1 affinity until CAS. We'll call affinity_reset()
>>> + * during CAS when we're sure about which NUMA affinity the guest
>>> + * is going to use.
>>> + *
>>> + * This step is a failsafe - guests in the wild were able to read
>>> + * FORM1 affinity info before CAS for a long time. Since affinity_reset()
>>> + * is just a pointer switch between data that was already populated
>>> + * here, this is an acceptable overhead to be on the safer side.
>>> + */
>>> + spapr->numa_assoc_array = spapr->FORM1_assoc_array;
>>
>> The right way to do that is to call spapr_numa_associativity_reset() from
>> spapr_machine_reset() because we want to revert to FORM1 each time the
>> guest is rebooted.
>>
>>> +}
>>> +
>>> +void spapr_numa_associativity_reset(SpaprMachineState *spapr)
>>> +{
>>> + /* No FORM2 affinity implemented yet */
>>
>> This seems quite obvious at this point, not sure the comment helps.
>>
>>> + spapr->numa_assoc_array = spapr->FORM1_assoc_array;
>>> }
>>> void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>>> int offset, int nodeid)
>>> {
>>> + /* Hardcode the size of FORM1 associativity array for now */
>>> _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
>>> spapr->numa_assoc_array[nodeid],
>>> - sizeof(spapr->numa_assoc_array[nodeid]))));
>>> + NUMA_ASSOC_SIZE * sizeof(uint32_t))));
>>
>> Rather than doing this temporary change that gets undone in
>> a later patch, I suggest you introduce get_numa_assoc_size()
>> in a preliminary patch and use it here already :
>>
>> - NUMA_ASSOC_SIZE * sizeof(uint32_t))));
>> + get_numa_assoc_size(spapr) * sizeof(uint32_t))));
>>
>> This will simplify the reviewing.
>>
>>> }
>>> static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>>> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
>>> index 637652ad16..8a9490f0bf 100644
>>> --- a/include/hw/ppc/spapr.h
>>> +++ b/include/hw/ppc/spapr.h
>>> @@ -249,7 +249,8 @@ struct SpaprMachineState {
>>> unsigned gpu_numa_id;
>>> SpaprTpmProxy *tpm_proxy;
>>> - uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
>>> + uint32_t *FORM1_assoc_array[MAX_NODES + NVGPU_MAX_NUM];
>>
>> As said above, I really don't see the point in not having :
>>
>> uint32_t *FORM1_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
>>
>>> + uint32_t **numa_assoc_array;
>>> Error *fwnmi_migration_blocker;
>>> };
>>> diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
>>> index 6f9f02d3de..ccf3e4eae8 100644
>>> --- a/include/hw/ppc/spapr_numa.h
>>> +++ b/include/hw/ppc/spapr_numa.h
>>> @@ -24,6 +24,7 @@
>>> */
>>> void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>> MachineState *machine);
>>> +void spapr_numa_associativity_reset(SpaprMachineState *spapr);
>>> void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
>>> void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>>> int offset, int nodeid);
>>
Thread overview: 16+ messages
2021-09-10 19:55 [PATCH v6 0/6] pSeries FORM2 affinity support Daniel Henrique Barboza
2021-09-10 19:55 ` [PATCH v6 1/6] spapr_numa.c: split FORM1 code into helpers Daniel Henrique Barboza
2021-09-14 8:23 ` Greg Kurz
2021-09-10 19:55 ` [PATCH v6 2/6] spapr_numa.c: scrap 'legacy_numa' concept Daniel Henrique Barboza
2021-09-14 8:34 ` Greg Kurz
2021-09-10 19:55 ` [PATCH v6 3/6] spapr: introduce spapr_numa_associativity_reset() Daniel Henrique Barboza
2021-09-14 11:55 ` Greg Kurz
2021-09-14 19:58 ` Daniel Henrique Barboza
2021-09-16 1:32 ` Daniel Henrique Barboza [this message]
2021-09-16 17:31 ` Greg Kurz
2021-09-10 19:55 ` [PATCH v6 4/6] spapr_numa.c: parametrize FORM1 macros Daniel Henrique Barboza
2021-09-14 12:10 ` Greg Kurz
2021-09-10 19:55 ` [PATCH v6 5/6] spapr: move FORM1 verifications to post CAS Daniel Henrique Barboza
2021-09-14 12:26 ` Greg Kurz
2021-09-10 19:55 ` [PATCH v6 6/6] spapr_numa.c: FORM2 NUMA affinity support Daniel Henrique Barboza
2021-09-14 12:58 ` Greg Kurz