From: Daniel Henrique Barboza <danielhb413@gmail.com>
To: qemu-devel@nongnu.org
Cc: clg@kaod.org, Daniel Henrique Barboza <danielhb413@gmail.com>,
qemu-ppc@nongnu.org, groug@kaod.org, david@gibson.dropbear.id.au
Subject: [PATCH 3/3] spapr_numa.c: fix ibm,max-associativity-domains calculation
Date: Thu, 28 Jan 2021 12:17:31 -0300
Message-ID: <20210128151731.1333664-4-danielhb413@gmail.com>
In-Reply-To: <20210128151731.1333664-1-danielhb413@gmail.com>
The current logic calculates 'maxdomain' as the sum of
numa_state->num_nodes and spapr->gpu_numa_id. However, spapr->gpu_numa_id
is an index: it tracks the next available NUMA id that a given NVGPU
can use.
The problem is that the initial value of gpu_numa_id, for any topology
with more than one NUMA node, is equal to numa_state->num_nodes. As a
result, maxdomain will always be at least twice the number of existing
NUMA nodes: a guest with 4 NUMA nodes ends up with the following
max-associativity-domains:
rtas/ibm,max-associativity-domains
00000004 00000008 00000008 00000008 00000008
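For illustration, the arithmetic follows from the helper shown in the
diff below: spapr_numa_initial_nvgpu_NUMA_id() returns MAX(1, num_nodes),
and gpu_numa_id starts at that value. For the 4-node guest above, with
no NVGPUs attached:

    gpu_numa_id = MAX(1, 4)               = 4
    maxdomain   = num_nodes + gpu_numa_id = 4 + 4 = 8

even though no NUMA id was ever handed out to an NVGPU.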
This oversized maxdomain doesn't go unnoticed in the guest, where SLUB
detects it during boot:
dmesg | grep SLUB
[ 0.000000] SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=4, Nodes=8
SLUB detects 8 total nodes, with only 4 of them online.
This patch fixes ibm,max-associativity-domains by considering the number
of NVGPU NUMA nodes present in the guest, instead of using
spapr->gpu_numa_id directly.
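As a minimal standalone sketch of the before/after arithmetic
(hypothetical demo code, not part of QEMU; the names mirror the
variables in the patch):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
    /* Assumed topology: 4 NUMA nodes, no NVGPUs attached. */
    unsigned int num_nodes = 4;                   /* ms->numa_state->num_nodes */
    unsigned int initial_id = MAX(1, num_nodes);  /* spapr_numa_initial_nvgpu_NUMA_id() */
    unsigned int gpu_numa_id = initial_id;        /* no NVGPU NUMA id assigned yet */

    /* Old calculation: counts the starting NVGPU NUMA id as if it
     * were a number of extra nodes. */
    unsigned int old_maxdomain = num_nodes + gpu_numa_id;

    /* New calculation: counts only NUMA ids actually assigned to
     * NVGPUs, i.e. zero here. */
    unsigned int new_maxdomain = num_nodes + (gpu_numa_id - initial_id);

    printf("old = %u, new = %u\n", old_maxdomain, new_maxdomain); /* old = 8, new = 4 */
    return 0;
}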
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
hw/ppc/spapr_numa.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index f71105c783..f4d6abce87 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -60,6 +60,19 @@ unsigned int spapr_numa_initial_nvgpu_NUMA_id(MachineState *machine)
     return MAX(1, machine->numa_state->num_nodes);
 }
 
+/*
+ * Note: if called before spapr_phb_pci_collect_nvgpu() finishes collecting
+ * all NVGPUs, this function will not return the correct number of NVGPU NUMA
+ * nodes.
+ */
+static
+unsigned int spapr_numa_get_number_nvgpus_nodes(SpaprMachineState *spapr)
+{
+    MachineState *ms = MACHINE(spapr);
+
+    return spapr->gpu_numa_id - spapr_numa_initial_nvgpu_NUMA_id(ms);
+}
+
 /*
  * This function will translate the user distances into
  * what the kernel understand as possible values: 10
@@ -311,6 +324,7 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
     MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
+    uint32_t number_nvgpus_nodes = spapr_numa_get_number_nvgpus_nodes(spapr);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
         cpu_to_be32(0x3),
@@ -318,7 +332,7 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
         cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
+    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;
     uint32_t maxdomains[] = {
         cpu_to_be32(4),
         cpu_to_be32(maxdomain),
--
2.26.2