From: Andrew Jones <drjones@redhat.com>
To: qemu-devel@nongnu.org
Cc: ehabkost@redhat.com
Subject: [Qemu-devel] [PATCH] vl: rework smp_parse
Date: Thu, 6 Nov 2014 17:09:35 +0100
Message-ID: <1415290175-17314-1-git-send-email-drjones@redhat.com>

smp_parse has a couple of problems. First, it should use max_cpus,
not smp_cpus, when calculating missing topology information.
Conversely, if maxcpus is not given, then the topology should
dictate max_cpus, as the topology may support more than the
given smp_cpus number. Second, smp_parse shouldn't silently
adjust the number of threads a user provides, which it currently
does in order to ensure the topology supports the number of
cpus provided. For example, with -smp 4,sockets=1,cores=2,threads=1
the current code recomputes threads = cpus / (cores * sockets) =
4 / (2 * 1) = 2, overriding the user's explicit threads=1.
smp_parse should instead complain and exit when the input isn't
consistent. This patch fixes those issues and attempts to clarify
the code a bit too.
I don't believe we need to consider compatibility with the old
behaviors. Specifying something like
-smp 4,sockets=1,cores=1,threads=1
is wrong, even though it currently works (the number of threads
is silently adjusted to 4). And, specifying something like
-smp 4,sockets=1,cores=4,threads=1,maxcpus=8
is also wrong, as there's no way to hotplug the additional 4 cpus
with this topology. So, any users doing these things should be
corrected, and the new error messages this patch provides should
help them do that.
Below are some examples with before/after results ('<' shows the
current behavior, '>' shows the behavior with this patch):
// topology should support up to max_cpus
< -smp 4,sockets=2,maxcpus=8                    ->  sockets=2 cores=2 threads=1 smp_cpus=4 max_cpus=8
> -smp 4,sockets=2,maxcpus=8                    ->  sockets=2 cores=4 threads=1 smp_cpus=4 max_cpus=8

// topology supports more than smp_cpus, max_cpus should be higher
< -smp 4,sockets=4,cores=2                      ->  sockets=4 cores=2 threads=1 smp_cpus=4 max_cpus=4
> -smp 4,sockets=4,cores=2                      ->  sockets=4 cores=2 threads=1 smp_cpus=4 max_cpus=8

// shouldn't silently adjust threads
< -smp 4,sockets=1,cores=2,threads=1            ->  sockets=1 cores=2 threads=2 smp_cpus=4 max_cpus=4
> -smp 4,sockets=1,cores=2,threads=1            ->  "maxcpus must be equal to or greater than smp"
< -smp 4,sockets=2,cores=2,threads=4,maxcpus=8  ->  sockets=2 cores=2 threads=1 smp_cpus=4 max_cpus=8
> -smp 4,sockets=2,cores=2,threads=4,maxcpus=8  ->  "cpu topology: sockets (2) * cores (2) * threads (4) != max_cpus (8)"
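
For anyone who wants to try other option combinations, here is a
minimal standalone sketch of the reworked computation. It is not
part of the patch: the function name smp_parse_sketch is made up
for illustration, and it prints and returns where vl.c calls
exit(1), so all the cases above can run in one go.

#include <stdio.h>

/* Pass 0 for any option the user did not specify on -smp. */
static void smp_parse_sketch(unsigned smp, unsigned sockets,
                             unsigned cores, unsigned threads,
                             unsigned maxcpus)
{
    /* fill in missing values against the topology ceiling */
    unsigned cpus = maxcpus ? maxcpus : smp;

    /* compute missing values, prefer sockets over cores over threads */
    if (cpus == 0) {
        cpus = sockets ? sockets : 1;
        cpus = cpus * cores ? cpus * cores : cpus;
        cpus = cpus * threads ? cpus * threads : cpus;
    }
    if (sockets == 0) {
        sockets = 1;
    }
    if (cores == 0) {
        threads = threads ? threads : 1;
        cores = cpus / (sockets * threads);
        cores = cores ? cores : 1;
    }
    if (threads == 0) {
        threads = cpus / (sockets * cores);
        threads = threads ? threads : 1;
    }

    /* the topology dictates max_cpus when it wasn't given */
    if (maxcpus == 0) {
        maxcpus = sockets * cores * threads;
    }
    if (sockets * cores * threads != maxcpus) {
        printf("cpu topology: sockets (%u) * cores (%u) * threads (%u) "
               "!= max_cpus (%u)\n", sockets, cores, threads, maxcpus);
        return;
    }

    smp = smp ? smp : maxcpus;
    if (maxcpus < smp) {
        printf("maxcpus must be equal to or greater than smp\n");
        return;
    }
    printf("sockets=%u cores=%u threads=%u smp_cpus=%u max_cpus=%u\n",
           sockets, cores, threads, smp, maxcpus);
}

int main(void)
{
    /* -smp 4,sockets=2,maxcpus=8 */
    smp_parse_sketch(4, 2, 0, 0, 8);
    /* -smp 4,sockets=4,cores=2 */
    smp_parse_sketch(4, 4, 2, 0, 0);
    /* -smp 4,sockets=1,cores=2,threads=1 */
    smp_parse_sketch(4, 1, 2, 1, 0);
    /* -smp 4,sockets=2,cores=2,threads=4,maxcpus=8 */
    smp_parse_sketch(4, 2, 2, 4, 8);
    return 0;
}

Compiled with e.g. 'cc -o smp-sketch smp-sketch.c', its output
matches the '>' lines above.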
Signed-off-by: Andrew Jones <drjones@redhat.com>
---
vl.c | 59 ++++++++++++++++++++++++++++++++++++++---------------------
1 file changed, 38 insertions(+), 21 deletions(-)
diff --git a/vl.c b/vl.c
index f4a6e5e05bce2..23b21a5fbca50 100644
--- a/vl.c
+++ b/vl.c
@@ -1270,35 +1270,52 @@ static QemuOptsList qemu_smp_opts = {
static void smp_parse(QemuOpts *opts)
{
if (opts) {
-
- unsigned cpus = qemu_opt_get_number(opts, "cpus", 0);
unsigned sockets = qemu_opt_get_number(opts, "sockets", 0);
unsigned cores = qemu_opt_get_number(opts, "cores", 0);
unsigned threads = qemu_opt_get_number(opts, "threads", 0);
+ unsigned cpus;
+
+ smp_cpus = qemu_opt_get_number(opts, "cpus", 0);
+ max_cpus = qemu_opt_get_number(opts, "maxcpus", 0);
+
+ cpus = max_cpus ? max_cpus : smp_cpus;
/* compute missing values, prefer sockets over cores over threads */
- if (cpus == 0 || sockets == 0) {
- sockets = sockets > 0 ? sockets : 1;
- cores = cores > 0 ? cores : 1;
- threads = threads > 0 ? threads : 1;
- if (cpus == 0) {
- cpus = cores * threads * sockets;
- }
- } else {
- if (cores == 0) {
- threads = threads > 0 ? threads : 1;
- cores = cpus / (sockets * threads);
- } else {
- threads = cpus / (cores * sockets);
- }
+ if (cpus == 0) {
+ cpus = sockets ? sockets : 1;
+ cpus = cpus * cores ? cpus * cores : cpus;
+ cpus = cpus * threads ? cpus * threads : cpus;
}
- max_cpus = qemu_opt_get_number(opts, "maxcpus", 0);
+ if (sockets == 0) {
+ sockets = 1;
+ }
- smp_cpus = cpus;
- smp_cores = cores > 0 ? cores : 1;
- smp_threads = threads > 0 ? threads : 1;
+ if (cores == 0) {
+ threads = threads ? threads : 1;
+ cores = cpus / (sockets * threads);
+ cores = cores ? cores : 1;
+ }
+
+ if (threads == 0) {
+ threads = cpus / (sockets * cores);
+ threads = threads ? threads : 1;
+ }
+
+ if (max_cpus == 0) {
+ max_cpus = sockets * cores * threads;
+ }
+ if (sockets * cores * threads != max_cpus) {
+ fprintf(stderr, "cpu topology: "
+ "sockets (%u) * cores (%u) * threads (%u) != max_cpus (%u)\n",
+ sockets, cores, threads, max_cpus);
+ exit(1);
+ }
+
+ smp_cpus = smp_cpus ? smp_cpus : max_cpus;
+ smp_cores = cores;
+ smp_threads = threads;
}
if (max_cpus == 0) {
@@ -1309,11 +1326,11 @@ static void smp_parse(QemuOpts *opts)
fprintf(stderr, "Unsupported number of maxcpus\n");
exit(1);
}
+
if (max_cpus < smp_cpus) {
fprintf(stderr, "maxcpus must be equal to or greater than smp\n");
exit(1);
}
-
}
static void realtime_init(void)
--
1.9.3