From: David Vrabel <david.vrabel@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
Date: Fri, 4 May 2012 19:18:43 +0100
Message-ID: <4FA41D83.2010809@citrix.com>
In-Reply-To: <4FA40321.3080201@citrix.com>

On 04/05/12 17:26, David Vrabel wrote:
> On 04/05/12 17:12, Jan Beulich wrote:
>>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
>>> From: David Vrabel <david.vrabel@citrix.com>
>>>
>>> The dom0_max_vcpus command line option only allows the exact number of
>>> VCPUs for dom0 to be set.  It is not possible to say "up to N VCPUs
>>> but no more than the number physically present."
>>>
>>> Add min: and max: prefixes to the option to set a minimum number of
>>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>>
>>> For example, with "dom0_max_vcpus=min:4,max:8":
>>
>> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
>> how about simply allowing a range along with a simple number (since
>> negative values make no sense, omitting either side of the range would
>> be supportable if necessary).
> 
> I was copying the way dom0_mem worked but yeah, it's not very pretty.
> 
> Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
> thinking of?
> 
> Using a single value would have to set both <min> and <max> or the
> behaviour of the option changes (i.e., =N is the same as =N-N).
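
To be concrete, these are the forms the range syntax accepts (per
parse_dom0_max_vcpus() in the patch below):

  dom0_max_vcpus=4      min = max = 4 (same behaviour as before)
  dom0_max_vcpus=4-8    at least 4 VCPUs, at most 8
  dom0_max_vcpus=-8     at most 8 (min defaults to 1)
  dom0_max_vcpus=4-     at least 4 (max defaults to unlimited)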

This is what I ended up with.
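
The resulting rule is: dom0 gets one VCPU per PCPU, clamped to the given
range (and to MAX_VIRT_CPUS).  Roughly, as a standalone sketch --
dom0_nr_vcpus() is just an illustrative name, the real logic lives in
alloc_dom0_vcpu0() below:

    /* Illustrative only: how the min/max range is applied. */
    static unsigned int dom0_nr_vcpus(unsigned int nr_pcpus,
                                      unsigned int min, unsigned int max)
    {
        unsigned int n = nr_pcpus;

        if ( n < min )
            n = min;            /* may exceed the number of PCPUs */
        if ( n > max )
            n = max;
        if ( n > MAX_VIRT_CPUS )
            n = MAX_VIRT_CPUS;

        return n;
    }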

8<------------------------------
From af1543965db76ab81139de7f072a7c4daf61157f Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 4 May 2012 16:09:52 +0100
Subject: [PATCH] x86: make the dom0_max_vcpus option more flexible

The dom0_max_vcpus command line option only allows the exact number of
VCPUs for dom0 to be set.  It is not possible to say "up to N VCPUs
but no more than the number physically present."

Allow a range for the option to set a minimum number of VCPUs, and a
maximum which does not exceed the number of PCPUs.

For example, with "dom0_max_vcpus=4-8":

    PCPUs  Dom0 VCPUs
     2      4
     4      4
     6      6
     8      8
    10      8

Existing command lines with "dom0_max_vcpus=N" still work as before.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 docs/misc/xen-command-line.markdown |   29 +++++++++++++++++++++--
 xen/arch/x86/domain_build.c         |   43 +++++++++++++++++++++++++---------
 2 files changed, 57 insertions(+), 15 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index a6195f2..4e4f713 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -272,10 +272,33 @@ Specify the bit width of the DMA heap.
 
 ### dom0\_ioports\_disable
 ### dom0\_max\_vcpus
-> `= <integer>`
 
-Specify the maximum number of vcpus to give to dom0.  This defaults
-to the number of pcpus on the host.
+Either:
+
+> `= <integer>`
+
+The number of VCPUs to give to dom0.  This may exceed the number of
+PCPUs present on the host.  The default is the number of PCPUs on the
+host.
+
+Or:
+
+> `= <min>-<max>` where `<min>` and `<max>` are integers.
+
+Gives dom0 a number of VCPUs equal to the number of PCPUs, but always
+at least `<min>` and no more than `<max>`.  Using `<min>` may give
+dom0 more VCPUs than PCPUs.  Either `<min>` or `<max>` may be omitted,
+in which case the defaults of 1 and unlimited respectively are used.
+
+For example, with `dom0_max_vcpus=4-8`:
+
+  Number of PCPUs | Dom0 VCPUs
+  ----------------+-----------
+         2        |     4
+         4        |     4
+         6        |     6
+         8        |     8
+        10        |     8
 
 ### dom0\_mem (ia64)
 > `= <size>`
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b3c5d4c..686b626 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -82,20 +82,39 @@ static void __init parse_dom0_mem(const char *s)
 }
 custom_param("dom0_mem", parse_dom0_mem);
 
-static unsigned int __initdata opt_dom0_max_vcpus;
-integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
+static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
+static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
+
+static void __init parse_dom0_max_vcpus(const char *s)
+{
+    if (*s == '-')              /* -M */
+        opt_dom0_max_vcpus_max = simple_strtoul(s + 1, &s, 0);
+    else {                      /* N, N-, or N-M */
+        opt_dom0_max_vcpus_min = simple_strtoul(s, &s, 0);
+        if (*s++ == '\0')       /* N */
+            opt_dom0_max_vcpus_max = opt_dom0_max_vcpus_min;
+        else if (*s != '\0')    /* N-M */
+            opt_dom0_max_vcpus_max = simple_strtoul(s, &s, 0);
+    }
+}
+custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
 
 struct vcpu *__init alloc_dom0_vcpu0(void)
 {
-    if ( opt_dom0_max_vcpus == 0 )
-        opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
-    if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
-        opt_dom0_max_vcpus = MAX_VIRT_CPUS;
+    unsigned max_vcpus;
+
+    max_vcpus = num_cpupool_cpus(cpupool0);
+    if ( opt_dom0_max_vcpus_min > max_vcpus )
+        max_vcpus = opt_dom0_max_vcpus_min;
+    if ( opt_dom0_max_vcpus_max < max_vcpus )
+        max_vcpus = opt_dom0_max_vcpus_max;
+    if ( max_vcpus > MAX_VIRT_CPUS )
+        max_vcpus = MAX_VIRT_CPUS;
 
-    dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+    dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
     if ( !dom0->vcpu )
         return NULL;
-    dom0->max_vcpus = opt_dom0_max_vcpus;
+    dom0->max_vcpus = max_vcpus;
 
     return alloc_vcpu(dom0, 0, 0);
 }
@@ -185,11 +204,11 @@ static unsigned long __init compute_dom0_nr_pages(
     unsigned long max_pages = dom0_max_nrpages;
 
     /* Reserve memory for further dom0 vcpu-struct allocations... */
-    avail -= (opt_dom0_max_vcpus - 1UL)
+    avail -= (d->max_vcpus - 1UL)
              << get_order_from_bytes(sizeof(struct vcpu));
     /* ...and compat_l4's, if needed. */
     if ( is_pv_32on64_domain(d) )
-        avail -= opt_dom0_max_vcpus - 1;
+        avail -= d->max_vcpus - 1;
 
     /* Reserve memory for iommu_dom0_init() (rough estimate). */
     if ( iommu_enabled )
@@ -883,10 +902,10 @@ int __init construct_dom0(
     for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
         shared_info(d, vcpu_info[i].evtchn_upcall_mask) = 1;
 
-    printk("Dom0 has maximum %u VCPUs\n", opt_dom0_max_vcpus);
+    printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
 
     cpu = cpumask_first(cpupool0->cpu_valid);
-    for ( i = 1; i < opt_dom0_max_vcpus; i++ )
+    for ( i = 1; i < d->max_vcpus; i++ )
     {
         cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
         (void)alloc_vcpu(d, i, cpu);
-- 
1.7.2.5

Thread overview: 9+ messages
2012-05-04 16:01 [PATCH] x86: make the dom0_max_vcpus option more flexible David Vrabel
2012-05-04 16:12 ` Jan Beulich
2012-05-04 16:26   ` David Vrabel
2012-05-04 18:18     ` David Vrabel [this message]
2012-05-07  8:19       ` Jan Beulich
2012-05-08 13:58         ` David Vrabel
2012-05-07  6:42     ` Jan Beulich
  -- strict thread matches above, loose matches on Subject: below --
2012-09-10 17:26 David Vrabel
2012-09-11  7:05 ` Jan Beulich
