From: David Gibson <david@gibson.dropbear.id.au>
To: peter.maydell@linaro.org, groug@kaod.org
Cc: danielhb413@gmail.com, qemu-devel@nongnu.org,
Min Deng <mdeng@redhat.com>,
qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>
Subject: [PULL 09/22] spapr: Fix buffer overflow in spapr_numa_associativity_init()
Date: Wed, 6 Jan 2021 14:38:03 +1100
Message-ID: <20210106033816.232598-10-david@gibson.dropbear.id.au>
In-Reply-To: <20210106033816.232598-1-david@gibson.dropbear.id.au>
From: Greg Kurz <groug@kaod.org>
Running a guest with 128 NUMA nodes crashes QEMU:
../../util/error.c:59: error_setv: Assertion `*errp == NULL' failed.
The crash happens when setting the FWNMI migration blocker:
2861         if (spapr_get_cap(spapr, SPAPR_CAP_FWNMI) == SPAPR_CAP_ON) {
2862             /* Create the error string for live migration blocker */
2863             error_setg(&spapr->fwnmi_migration_blocker,
2864                 "A machine check is being handled during migration. The handler"
2865                 "may run and log hardware error on the destination");
2866         }
Inspection reveals that spapr->fwnmi_migration_blocker isn't NULL:
(gdb) p spapr->fwnmi_migration_blocker
$1 = (Error *) 0x8000000004000000
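For context, QEMU's error API requires the caller's Error pointer to be
NULL before a new error is stored into it; error_setg() goes through
error_setv(), which asserts exactly that. A minimal self-contained
sketch of the contract (simplified, not the actual util/error.c code;
the Error type and allocation here are stand-ins):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Error { char *msg; } Error;  /* simplified stand-in */

    /* Sketch of the precondition enforced by error_setg()/error_setv() */
    static void error_setg_sketch(Error **errp, const char *msg)
    {
        assert(*errp == NULL);  /* the assertion that fired above */
        *errp = calloc(1, sizeof(**errp));
        (*errp)->msg = strdup(msg);
    }

So the assertion firing here means the field already held a non-NULL
value before error_setg() ever ran.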
Since this is the only place where spapr->fwnmi_migration_blocker is
set, this means something else wrote to it behind our back. Further
analysis points to spapr_numa_associativity_init(), especially the part
that initializes the associativity arrays for NVLink GPUs:
    max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;

i.e. max_nodes_with_gpus = 128 + 6, but the array isn't sized to
accommodate the 6 extra nodes:
    struct SpaprMachineState {
        .
        .
        .
        uint32_t numa_assoc_array[MAX_NODES][NUMA_ASSOC_SIZE];

        Error *fwnmi_migration_blocker;
    };
and the following loops happily overwrite spapr->fwnmi_migration_blocker,
and probably more:
    for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);

        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
            uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
                                 SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
            spapr->numa_assoc_array[i][j] = gpu_assoc;
        }

        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
    }
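The mechanism is easy to reproduce in isolation. Here is a minimal
standalone sketch (hypothetical sizes and names, not the real spapr
structures) of how indexing past a fixed-size array embedded in a
struct clobbers whatever member happens to follow it:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_NODES_SKETCH 4  /* stands in for MAX_NODES (128) */
    #define EXTRA            2  /* stands in for NVGPU_MAX_NUM (6) */

    struct Machine {
        uint32_t assoc[MAX_NODES_SKETCH][2];
        void *blocker;  /* plays the role of fwnmi_migration_blocker */
    };

    int main(void)
    {
        struct Machine m = { .blocker = NULL };

        /* Iterating up to MAX_NODES_SKETCH + EXTRA writes past the end
         * of 'assoc': undefined behaviour that in practice lands on the
         * adjacent 'blocker' field, just like the GPU loop above. */
        for (int i = 0; i < MAX_NODES_SKETCH + EXTRA; i++) {
            m.assoc[i][0] = 0x04000000;
            m.assoc[i][1] = 0x80000000;
        }

        printf("blocker = %p\n", m.blocker);  /* no longer NULL */
        return 0;
    }

Incidentally, the bogus pointer seen in gdb decodes consistently with
this: read as two 32-bit words on a little-endian host,
0x8000000004000000 is 0x04000000 and 0x80000000, which (assuming
MAX_DISTANCE_REF_POINTS is 4) are cpu_to_be32(MAX_DISTANCE_REF_POINTS)
and cpu_to_be32(128), exactly the values the loop stores once i
reaches 128.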
Fix the size of the array. This requires "hw/ppc/spapr.h" to see
NVGPU_MAX_NUM. Including "hw/pci-host/spapr.h" introduces a
circular dependency that breaks the build, so this moves the
definition of NVGPU_MAX_NUM to "hw/ppc/spapr.h" instead.
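Not part of this patch, but a compile-time guard could catch this class
of mismatch outright. A hypothetical sketch using QEMU's
QEMU_BUILD_BUG_ON and ARRAY_SIZE macros (placement and exact form are
an assumption, nothing this patch adds):

    /* Hypothetical guard: fail the build if the associativity array
     * cannot hold every NUMA node plus the extra GPU nodes. */
    QEMU_BUILD_BUG_ON(MAX_NODES + NVGPU_MAX_NUM >
                      ARRAY_SIZE(((SpaprMachineState *)0)->numa_assoc_array));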
Reported-by: Min Deng <mdeng@redhat.com>
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1908693
Fixes: dd7e1d7ae431 ("spapr_numa: move NVLink2 associativity handling to spapr_numa.c")
Cc: danielhb413@gmail.com
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160829960428.734871.12634150161215429514.stgit@bahia.lan>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
include/hw/pci-host/spapr.h | 2 --
include/hw/ppc/spapr.h | 5 ++++-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index 4f58f0223b..bd014823a9 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -115,8 +115,6 @@ struct SpaprPhbState {
 #define SPAPR_PCI_NV2RAM64_WIN_BASE SPAPR_PCI_LIMIT
 #define SPAPR_PCI_NV2RAM64_WIN_SIZE (2 * TiB) /* For up to 6 GPUs 256GB each */
 
-/* Max number of these GPUs per a physical box */
-#define NVGPU_MAX_NUM 6
 /* Max number of NVLinks per GPU in any physical box */
 #define NVGPU_MAX_LINKS 3
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 06a5b4259f..1cc19575f5 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -112,6 +112,9 @@ typedef enum {
 #define NUMA_ASSOC_SIZE (MAX_DISTANCE_REF_POINTS + 1)
 #define VCPU_ASSOC_SIZE (NUMA_ASSOC_SIZE + 1)
 
+/* Max number of these GPUs per a physical box */
+#define NVGPU_MAX_NUM 6
+
 typedef struct SpaprCapabilities SpaprCapabilities;
 struct SpaprCapabilities {
     uint8_t caps[SPAPR_CAP_NUM];
@@ -240,7 +243,7 @@ struct SpaprMachineState {
     unsigned gpu_numa_id;
     SpaprTpmProxy *tpm_proxy;
 
-    uint32_t numa_assoc_array[MAX_NODES][NUMA_ASSOC_SIZE];
+    uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
--
2.29.2