From: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
To: kvm@vger.kernel.org
Cc: aliguori@us.ibm.com, avi@redhat.com, andre.przywara@amd.com,
	seabios@seabios.org,
	Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
Subject: [PATCH] Set numa topology for max_cpus
Date: Wed, 26 Oct 2011 14:19:00 +0200
Message-ID: <1319631540-32611-1-git-send-email-vasilis.liaskovitis@profitbricks.com>

qemu-kvm passes NUMA/SRAT topology information for smp_cpus to SeaBIOS. However,
SeaBIOS always expects to set up max_cpus SRAT CPU entries (the MaxCountCPUs
variable in SeaBIOS's build_srat function). When qemu-kvm runs with
smp_cpus != max_cpus (e.g. -smp 2,maxcpus=4), SeaBIOS mistakenly uses the SRAT
memory info to set up CPU SRAT entries for the offline CPUs, and wrong SRAT
memory entries are created as well. This breaks NUMA in the guest.
Fix this by setting up SRAT info for max_cpus in qemu-kvm.
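
For illustration, here is a standalone sketch (not QEMU code; helper and
variable names are hypothetical) of the fw_cfg NUMA table layout described
in the bochs_bios_init comment in the hunk below, and of how a consumer that
walks max_cpus entries (as SeaBIOS's build_srat does) misreads a table that
was sized for smp_cpus only:

    /* Layout: [nb_nodes][cpu->node x ncpus][mem x nb_nodes], one 64-bit
     * word each.  Endianness (cpu_to_le64 in QEMU) is ignored here. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    static uint64_t *build_numa_table(int ncpus, int nb_nodes,
                                      const uint64_t *node_mem)
    {
        uint64_t *tbl = calloc(1 + ncpus + nb_nodes, sizeof(uint64_t));
        tbl[0] = nb_nodes;
        for (int i = 0; i < ncpus; i++)
            tbl[1 + i] = i % nb_nodes;       /* round-robin CPU->node */
        for (int j = 0; j < nb_nodes; j++)
            tbl[1 + ncpus + j] = node_mem[j];
        return tbl;
    }

    int main(void)
    {
        const uint64_t node_mem[2] = { 512u << 20, 512u << 20 };
        int smp_cpus = 2, max_cpus = 4, nb_nodes = 2;

        /* Pre-patch behaviour: table sized for smp_cpus only. */
        uint64_t *tbl = build_numa_table(smp_cpus, nb_nodes, node_mem);

        /* A reader that walks max_cpus entries, like build_srat. */
        for (int i = 0; i < max_cpus; i++)
            printf("cpu %d -> node %llu\n", i,
                   (unsigned long long)tbl[1 + i]);
        free(tbl);
        return 0;
    }

With smp_cpus=2 and max_cpus=4 the last two lines print the 512MB node
sizes (536870912) where node IDs were expected, which is exactly the
corruption described above; sizing the table for max_cpus instead makes
those reads land on the intended CPU->node words.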

Signed-off-by: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
---
 hw/pc.c |    8 ++++----
 vl.c    |    2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hw/pc.c b/hw/pc.c
index fb49612..e0a65da 100644
--- a/hw/pc.c
+++ b/hw/pc.c
@@ -626,9 +626,9 @@ static void *bochs_bios_init(void)
      * of nodes, one word for each VCPU->node and one word for each node to
      * hold the amount of memory.
      */
-    numa_fw_cfg = g_malloc0((1 + smp_cpus + nb_numa_nodes) * 8);
+    numa_fw_cfg = g_malloc0((1 + max_cpus + nb_numa_nodes) * 8);
     numa_fw_cfg[0] = cpu_to_le64(nb_numa_nodes);
-    for (i = 0; i < smp_cpus; i++) {
+    for (i = 0; i < max_cpus; i++) {
         for (j = 0; j < nb_numa_nodes; j++) {
             if (node_cpumask[j] & (1 << i)) {
                 numa_fw_cfg[i + 1] = cpu_to_le64(j);
@@ -637,10 +637,10 @@ static void *bochs_bios_init(void)
         }
     }
     for (i = 0; i < nb_numa_nodes; i++) {
-        numa_fw_cfg[smp_cpus + 1 + i] = cpu_to_le64(node_mem[i]);
+        numa_fw_cfg[max_cpus + 1 + i] = cpu_to_le64(node_mem[i]);
     }
     fw_cfg_add_bytes(fw_cfg, FW_CFG_NUMA, (uint8_t *)numa_fw_cfg,
-                     (1 + smp_cpus + nb_numa_nodes) * 8);
+                     (1 + max_cpus + nb_numa_nodes) * 8);
 
     return fw_cfg;
 }
diff --git a/vl.c b/vl.c
index aeed54a..a3ae262 100644
--- a/vl.c
+++ b/vl.c
@@ -3410,7 +3410,7 @@ int main(int argc, char **argv, char **envp)
          * real machines which also use this scheme.
          */
         if (i == nb_numa_nodes) {
-            for (i = 0; i < smp_cpus; i++) {
+            for (i = 0; i < max_cpus; i++) {
                 node_cpumask[i % nb_numa_nodes] |= 1 << i;
             }
         }
-- 
1.7.6.3
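
To reproduce or verify the fix (a hypothetical invocation; the exact -numa
syntax depends on the QEMU version in use):

    qemu-kvm -smp 2,maxcpus=4 \
        -numa node,mem=512M,cpus=0-1 -numa node,mem=512M,cpus=2-3 [...]

Without the fix, numactl --hardware inside such a guest reports broken
node/CPU assignments; with it, both nodes and the mappings for all four
possible CPUs come out as configured.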

