From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: qemu-devel@nongnu.org
Cc: aik@ozlabs.ru, qemu-ppc@nongnu.org,
Paul Mackerras <paulus@samba.org>, Alexander Graf <agraf@suse.de>
Subject: [Qemu-devel] [PATCH v6 2/2] spapr: limit numa memory regions by ram size
Date: Mon, 25 Nov 2013 14:14:51 +1100
Message-ID: <1385349291-14974-3-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1385349291-14974-1-git-send-email-aik@ozlabs.ru>
From: Paul Mackerras <paulus@samba.org>
This makes sure that all NUMA memory blocks reside within RAM or
have zero length.
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
V6: fixed a typo
---
This fixes the device tree generated for configurations like the
following, where the NUMA nodes together request more memory
(1000 MB) than the machine actually has (500 MB):
-m 500
-smp 8,sockets=2,cores=2,threads=2
-numa node,nodeid=0,cpus=0-3,mem=500
-numa node,nodeid=1,cpus=4-7,mem=500
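
For illustration, here is a minimal standalone sketch (not part of the
patch; names and MB units are simplified) of the clamping logic applied
to the configuration above. With ram_size = 500, node 0 keeps its full
500 MB, and node 1, whose start offset already equals ram_size, ends up
with zero length:

#include <stdio.h>
#include <inttypes.h>

/* Hypothetical standalone model of the clamping added to
 * spapr_populate_memory(); sizes are in MB for readability. */
int main(void)
{
    uint64_t ram_size = 500;             /* -m 500 */
    uint64_t node_mem[] = { 500, 500 };  /* -numa ... mem=500, twice */
    int nb_numa_nodes = 2;

    /* Node 0 is capped at ram_size, as in the patch. */
    uint64_t node0_size = (nb_numa_nodes > 1 && node_mem[0] < ram_size)
                          ? node_mem[0] : ram_size;
    uint64_t mem_start = node0_size;
    printf("node 0: start=0 size=%" PRIu64 "\n", node0_size);

    for (int i = 1; i < nb_numa_nodes; i++) {
        uint64_t node_size;
        if (mem_start >= ram_size) {
            node_size = 0;               /* node starts beyond end of RAM */
        } else {
            node_size = node_mem[i];
            if (node_size > ram_size - mem_start) {
                node_size = ram_size - mem_start;  /* clamp to remaining RAM */
            }
        }
        printf("node %d: start=%" PRIu64 " size=%" PRIu64 "\n",
               i, mem_start, node_size);
        mem_start += node_size;
    }
    return 0;
}

The sketch prints "node 0: start=0 size=500" and "node 1: start=500
size=0", i.e. every memory block either resides within RAM or has zero
length, matching what the patched spapr_populate_memory() now writes
into the device tree.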
---
hw/ppc/spapr.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 7426518..1239d80 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -526,12 +526,16 @@ static int spapr_populate_memory(sPAPREnvironment *spapr, void *fdt)
cpu_to_be32(0x0), cpu_to_be32(0x0),
cpu_to_be32(0x0)};
char mem_name[32];
- hwaddr node0_size, mem_start;
+ hwaddr node0_size, mem_start, node_size;
uint64_t mem_reg_property[2];
int i, off;
/* memory node(s) */
- node0_size = (nb_numa_nodes > 1) ? node_mem[0] : ram_size;
+ if (nb_numa_nodes > 1 && node_mem[0] < ram_size) {
+ node0_size = node_mem[0];
+ } else {
+ node0_size = ram_size;
+ }
/* RMA */
mem_reg_property[0] = 0;
@@ -563,7 +567,15 @@ static int spapr_populate_memory(sPAPREnvironment *spapr, void *fdt)
mem_start = node0_size;
for (i = 1; i < nb_numa_nodes; i++) {
mem_reg_property[0] = cpu_to_be64(mem_start);
- mem_reg_property[1] = cpu_to_be64(node_mem[i]);
+ if (mem_start >= ram_size) {
+ node_size = 0;
+ } else {
+ node_size = node_mem[i];
+ if (node_size > ram_size - mem_start) {
+ node_size = ram_size - mem_start;
+ }
+ }
+ mem_reg_property[1] = cpu_to_be64(node_size);
associativity[3] = associativity[4] = cpu_to_be32(i);
sprintf(mem_name, "memory@" TARGET_FMT_lx, mem_start);
off = fdt_add_subnode(fdt, 0, mem_name);
@@ -573,7 +585,7 @@ static int spapr_populate_memory(sPAPREnvironment *spapr, void *fdt)
sizeof(mem_reg_property))));
_FDT((fdt_setprop(fdt, off, "ibm,associativity", associativity,
sizeof(associativity))));
- mem_start += node_mem[i];
+ mem_start += node_size;
}
return 0;
--
1.8.4.rc4