* [PATCH] x86: make the dom0_max_vcpus option more flexible
@ 2012-05-04 16:01 David Vrabel
2012-05-04 16:12 ` Jan Beulich
0 siblings, 1 reply; 9+ messages in thread
From: David Vrabel @ 2012-05-04 16:01 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Jan Beulich
From: David Vrabel <david.vrabel@citrix.com>
The dom0_max_vcpus command line option only allows the exact number of
VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
but no more than the number physically present."
Add min: and max: prefixes to the option to set a minimum number of
VCPUs, and a maximum which does not exceed the number of PCPUs.
For example, with "dom0_max_vcpus=min:4,max:8":
PCPUs Dom0 VCPUs
2 4
4 4
6 6
8 8
10 8
The existing behaviour of "dom0_max_vcpus=N" still works as before.
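In other words, dom0 gets one VCPU per PCPU, clamped to the requested
bounds. A rough sketch of the rule behind the table above (illustrative
only, not code from the patch):

    /* Hypothetical helper showing the intended clamping rule. */
    static unsigned int dom0_vcpu_count(unsigned int pcpus,
                                        unsigned int min, unsigned int max)
    {
        unsigned int n = pcpus;

        if ( n < min )      /* min: may give more VCPUs than PCPUs */
            n = min;
        if ( n > max )      /* max: is never exceeded */
            n = max;
        return n;           /* min=4, max=8: 2->4, 6->6, 10->8 */
    }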
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
docs/misc/xen-command-line.markdown | 29 +++++++++++++++++++++++++++--
xen/arch/x86/domain_build.c | 23 ++++++++++++++++++++++-
2 files changed, 49 insertions(+), 3 deletions(-)
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index a6195f2..5f0c2cd 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -272,10 +272,35 @@ Specify the bit width of the DMA heap.
### dom0\_ioports\_disable
### dom0\_max\_vcpus
+
+Either:
+
> `= <integer>`
-Specify the maximum number of vcpus to give to dom0. This defaults
-to the number of pcpus on the host.
+The maximum number of VCPUs to give to dom0. This number of VCPUs can
+be more than the number of PCPUs on the host. The default is the
+number of PCPUs.
+
+Or:
+
+> `= List of ( min:<integer> | max:<integer> )`
+
+With the `min:` option dom0 will have at least this minimum number of
+VCPUs (default: 1). This may be more than the number of PCPUs on the
+host.
+
+With the `max:` option dom0 will have a VCPU for each PCPU but no
+more than this maximum number (default: unlimited).
+
+For example, with `dom0_max_vcpus=min:4,max:8`:
+
+ Number of
+ PCPUs | Dom0 VCPUs
+ 2 | 4
+ 4 | 4
+ 6 | 6
+ 8 | 8
+ 10 | 8
### dom0\_mem (ia64)
> `= <size>`
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b3c5d4c..5407f8d 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -83,7 +83,24 @@ static void __init parse_dom0_mem(const char *s)
custom_param("dom0_mem", parse_dom0_mem);
static unsigned int __initdata opt_dom0_max_vcpus;
-integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
+static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
+static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
+
+static void __init parse_dom0_max_vcpus(const char *s)
+{
+ do {
+ if ( !strncmp(s, "min:", 4) )
+ opt_dom0_max_vcpus_min = simple_strtoul(s+4, &s, 0);
+ else if ( !strncmp(s, "max:", 4) )
+ opt_dom0_max_vcpus_max = simple_strtoul(s+4, &s, 0);
+ else
+ opt_dom0_max_vcpus = simple_strtoul(s, &s, 0);
+ if ( *s != ',' )
+ break;
+ } while ( *s++ == ',' );
+
+}
+custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
struct vcpu *__init alloc_dom0_vcpu0(void)
{
@@ -91,6 +108,10 @@ struct vcpu *__init alloc_dom0_vcpu0(void)
opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
opt_dom0_max_vcpus = MAX_VIRT_CPUS;
+ if ( opt_dom0_max_vcpus_min > opt_dom0_max_vcpus )
+ opt_dom0_max_vcpus = opt_dom0_max_vcpus_min;
+ if ( opt_dom0_max_vcpus_max < opt_dom0_max_vcpus )
+ opt_dom0_max_vcpus = opt_dom0_max_vcpus_max;
dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
if ( !dom0->vcpu )
--
1.7.2.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-04 16:01 [PATCH] x86: make the dom0_max_vcpus option more flexible David Vrabel
@ 2012-05-04 16:12 ` Jan Beulich
2012-05-04 16:26 ` David Vrabel
0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2012-05-04 16:12 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel
>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> The dom0_max_vcpus command line option only allows the exact number of
> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
> but no more than the number physically present."
>
> Add min: and max: prefixes to the option to set a minimum number of
> VCPUs, and a maximum which does not exceed the number of PCPUs.
>
> For example, with "dom0_max_vcpus=min:4,max:8":
Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
how about simply allowing a range along with a simple number (since
negative values make no sense, omitting either side of the range would
be supportable if necessary.
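For instance, a range syntax might accept something like this
(illustrative values only; exact defaults for omitted bounds would need
to be defined, e.g. 1 and unlimited):

    dom0_max_vcpus=8      ->  exactly 8 VCPUs (as today)
    dom0_max_vcpus=4-8    ->  one VCPU per PCPU, at least 4, at most 8
    dom0_max_vcpus=4-     ->  at least 4, no upper bound
    dom0_max_vcpus=-8     ->  at most 8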
> PCPUs Dom0 VCPUs
> 2 4
> 4 4
> 6 6
> 8 8
> 10 8
>
> The existing behaviour of "dom0_max_vcpus=N" still works as before.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> docs/misc/xen-command-line.markdown | 29 +++++++++++++++++++++++++++--
> xen/arch/x86/domain_build.c | 23 ++++++++++++++++++++++-
> 2 files changed, 49 insertions(+), 3 deletions(-)
>
> diff --git a/docs/misc/xen-command-line.markdown
> b/docs/misc/xen-command-line.markdown
> index a6195f2..5f0c2cd 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -272,10 +272,35 @@ Specify the bit width of the DMA heap.
>
> ### dom0\_ioports\_disable
> ### dom0\_max\_vcpus
> +
> +Either:
> +
> > `= <integer>`
>
> -Specify the maximum number of vcpus to give to dom0. This defaults
> -to the number of pcpus on the host.
> +The maximum number of VCPUs to give to dom0. This number of VCPUs can
> +be more than the number of PCPUs on the host. The default is the
> +number of PCPUs.
> +
> +Or:
> +
> +> `= List of ( min:<integer> | max:<integer> )`
> +
> +With the `min:` option dom0 will have at least this minimum number of
> +VCPUs (default: 1). This may be more than the number of PCPUs on the
> +host.
> +
> +With the `max:` option dom0 will have a VCPUs for each PCPUs but no
> +more than this maximum number (default: unlimited).
> +
> +For example, with `dom0_max_vcpus=min:4,max:8`:
> +
> + Number of
> + PCPUs | Dom0 VCPUs
> + 2 | 4
> + 4 | 4
> + 6 | 6
> + 8 | 8
> + 10 | 8
>
> ### dom0\_mem (ia64)
> > `= <size>`
> diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
> index b3c5d4c..5407f8d 100644
> --- a/xen/arch/x86/domain_build.c
> +++ b/xen/arch/x86/domain_build.c
> @@ -83,7 +83,24 @@ static void __init parse_dom0_mem(const char *s)
> custom_param("dom0_mem", parse_dom0_mem);
>
> static unsigned int __initdata opt_dom0_max_vcpus;
> -integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> +static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
> +static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
> +
> +static void __init parse_dom0_max_vcpus(const char *s)
> +{
> + do {
> + if ( !strncmp(s, "min:", 4) )
> + opt_dom0_max_vcpus_min = simple_strtoul(s+4, &s, 0);
> + else if ( !strncmp(s, "max:", 4) )
> + opt_dom0_max_vcpus_max = simple_strtoul(s+4, &s, 0);
> + else
> + opt_dom0_max_vcpus = simple_strtoul(s, &s, 0);
> + if ( *s != ',' )
> + break;
> + } while ( *s++ == ',' );
> +
> +}
> +custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
>
> struct vcpu *__init alloc_dom0_vcpu0(void)
> {
> @@ -91,6 +108,10 @@ struct vcpu *__init alloc_dom0_vcpu0(void)
> opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
> if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
> opt_dom0_max_vcpus = MAX_VIRT_CPUS;
> + if ( opt_dom0_max_vcpus_min > opt_dom0_max_vcpus )
> + opt_dom0_max_vcpus = opt_dom0_max_vcpus_min;
Enlarging the value after the MAX_VIRT_CPUS range check must
not be done. You probably simply want to move your addition up
two lines.
> + if ( opt_dom0_max_vcpus_max < opt_dom0_max_vcpus )
> + opt_dom0_max_vcpus = opt_dom0_max_vcpus_max;
But please avoid ...=max: (number lost for some reason) rendering
the box unbootable (I'd say a maximum of zero should be interpreted
as 1).
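I.e. something along these lines (only a sketch of the suggested
ordering, not the final code):

    if ( opt_dom0_max_vcpus == 0 )
        opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
    if ( opt_dom0_max_vcpus_min > opt_dom0_max_vcpus )
        opt_dom0_max_vcpus = opt_dom0_max_vcpus_min;
    if ( opt_dom0_max_vcpus_max < opt_dom0_max_vcpus )
        opt_dom0_max_vcpus = opt_dom0_max_vcpus_max;
    if ( opt_dom0_max_vcpus == 0 )     /* e.g. a bare "max:"; keep dom0 bootable */
        opt_dom0_max_vcpus = 1;
    if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )   /* hard cap applied last */
        opt_dom0_max_vcpus = MAX_VIRT_CPUS;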
Jan
>
> dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
> if ( !dom0->vcpu )
> --
> 1.7.2.5
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-04 16:12 ` Jan Beulich
@ 2012-05-04 16:26 ` David Vrabel
2012-05-04 18:18 ` David Vrabel
2012-05-07 6:42 ` Jan Beulich
0 siblings, 2 replies; 9+ messages in thread
From: David Vrabel @ 2012-05-04 16:26 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel
On 04/05/12 17:12, Jan Beulich wrote:
>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> The dom0_max_vcpus command line option only allows the exact number of
>> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
>> but no more than the number physically present."
>>
>> Add min: and max: prefixes to the option to set a minimum number of
>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>
>> For example, with "dom0_max_vcpus=min:4,max:8":
>
> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
> how about simply allowing a range along with a simple number (since
> negative values make no sense, omitting either side of the range would
> be supportable if necessary.
I was copying the way dom0_mem worked but yeah, it's not very pretty.
Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
thinking of?
Using a single value would have to set both <min> and <max> or the
behaviour of the option changes (i.e., =N is the same as =N-N).
David
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-04 16:26 ` David Vrabel
@ 2012-05-04 18:18 ` David Vrabel
2012-05-07 8:19 ` Jan Beulich
2012-05-07 6:42 ` Jan Beulich
1 sibling, 1 reply; 9+ messages in thread
From: David Vrabel @ 2012-05-04 18:18 UTC (permalink / raw)
To: David Vrabel; +Cc: Jan Beulich, xen-devel
On 04/05/12 17:26, David Vrabel wrote:
> On 04/05/12 17:12, Jan Beulich wrote:
>>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
>>> From: David Vrabel <david.vrabel@citrix.com>
>>>
>>> The dom0_max_vcpus command line option only allows the exact number of
>>> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
>>> but no more than the number physically present."
>>>
>>> Add min: and max: prefixes to the option to set a minimum number of
>>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>>
>>> For example, with "dom0_max_vcpus=min:4,max:8":
>>
>> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
>> how about simply allowing a range along with a simple number (since
>> negative values make no sense, omitting either side of the range would
>> be supportable if necessary.
>
> I was copying the way dom0_mem worked but yeah, it's not very pretty.
>
> Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
> thinking of?
>
> Using a single value would have to set both <min> and <max> or the
> behaviour of the option changes (i.e., =N is the same as =N-N).
This is what I ended up with.
8<------------------------------
>From af1543965db76ab81139de7f072a7c4daf61157f Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Fri, 4 May 2012 16:09:52 +0100
Subject: [PATCH] x86: make the dom0_max_vcpus option more flexible
The dom0_max_vcpus command line option only allows the exact number of
VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
but no more than the number physically present."
Allow a range for the option to set a minimum number of VCPUs, and a
maximum which does not exceed the number of PCPUs.
For example, with "dom0_max_vcpus=4-8":
PCPUs Dom0 VCPUs
2 4
4 4
6 6
8 8
10 8
Existing command lines with "dom0_max_vcpus=N" still work as before.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
docs/misc/xen-command-line.markdown | 29 +++++++++++++++++++++--
xen/arch/x86/domain_build.c | 43 +++++++++++++++++++++++++---------
2 files changed, 57 insertions(+), 15 deletions(-)
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index a6195f2..4e4f713 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -272,10 +272,33 @@ Specify the bit width of the DMA heap.
### dom0\_ioports\_disable
### dom0\_max\_vcpus
-> `= <integer>`
-Specify the maximum number of vcpus to give to dom0. This defaults
-to the number of pcpus on the host.
+Either:
+
+> `= <integer>`.
+
+The number of VCPUs to give to dom0. This number of VCPUs can be more
+than the number of PCPUs on the host. The default is the number of
+PCPUs.
+
+Or:
+
+> `= <min>-<max>` where `<min>` and `<max>` are integers.
+
+Gives dom0 a number of VCPUs equal to the number of PCPUs, but always
+at least `<min>` and no more than `<max>`. Using `<min>` may give
+more VCPUs than PCPUs. `<min>` or `<max>` may be omitted and the
+defaults of 1 and unlimited respectively are used instead.
+
+For example, with `dom0_max_vcpus=4-8`:
+
+ Number of
+ PCPUs | Dom0 VCPUs
+ 2 | 4
+ 4 | 4
+ 6 | 6
+ 8 | 8
+ 10 | 8
### dom0\_mem (ia64)
> `= <size>`
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b3c5d4c..686b626 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -82,20 +82,39 @@ static void __init parse_dom0_mem(const char *s)
}
custom_param("dom0_mem", parse_dom0_mem);
-static unsigned int __initdata opt_dom0_max_vcpus;
-integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
+static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
+static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
+
+static void __init parse_dom0_max_vcpus(const char *s)
+{
+ if (*s == '-') /* -M */
+ opt_dom0_max_vcpus_max = simple_strtoul(s + 1, &s, 0);
+ else { /* N, N-, or N-M */
+ opt_dom0_max_vcpus_min = simple_strtoul(s, &s, 0);
+ if (*s++ == '\0') /* N */
+ opt_dom0_max_vcpus_max = opt_dom0_max_vcpus_min;
+ else if (*s != '\0') /* N-M */
+ opt_dom0_max_vcpus_max = simple_strtoul(s, &s, 0);
+ }
+}
+custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
struct vcpu *__init alloc_dom0_vcpu0(void)
{
- if ( opt_dom0_max_vcpus == 0 )
- opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
- if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
- opt_dom0_max_vcpus = MAX_VIRT_CPUS;
+ unsigned max_vcpus;
+
+ max_vcpus = num_cpupool_cpus(cpupool0);
+ if ( opt_dom0_max_vcpus_min > max_vcpus )
+ max_vcpus = opt_dom0_max_vcpus_min;
+ if ( opt_dom0_max_vcpus_max < max_vcpus )
+ max_vcpus = opt_dom0_max_vcpus_max;
+ if ( max_vcpus > MAX_VIRT_CPUS )
+ max_vcpus = MAX_VIRT_CPUS;
- dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+ dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
if ( !dom0->vcpu )
return NULL;
- dom0->max_vcpus = opt_dom0_max_vcpus;
+ dom0->max_vcpus = max_vcpus;
return alloc_vcpu(dom0, 0, 0);
}
@@ -185,11 +204,11 @@ static unsigned long __init compute_dom0_nr_pages(
unsigned long max_pages = dom0_max_nrpages;
/* Reserve memory for further dom0 vcpu-struct allocations... */
- avail -= (opt_dom0_max_vcpus - 1UL)
+ avail -= (d->max_vcpus - 1UL)
<< get_order_from_bytes(sizeof(struct vcpu));
/* ...and compat_l4's, if needed. */
if ( is_pv_32on64_domain(d) )
- avail -= opt_dom0_max_vcpus - 1;
+ avail -= d->max_vcpus - 1;
/* Reserve memory for iommu_dom0_init() (rough estimate). */
if ( iommu_enabled )
@@ -883,10 +902,10 @@ int __init construct_dom0(
for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
shared_info(d, vcpu_info[i].evtchn_upcall_mask) = 1;
- printk("Dom0 has maximum %u VCPUs\n", opt_dom0_max_vcpus);
+ printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
cpu = cpumask_first(cpupool0->cpu_valid);
- for ( i = 1; i < opt_dom0_max_vcpus; i++ )
+ for ( i = 1; i < d->max_vcpus; i++ )
{
cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
(void)alloc_vcpu(d, i, cpu);
--
1.7.2.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-04 18:18 ` David Vrabel
@ 2012-05-07 8:19 ` Jan Beulich
2012-05-08 13:58 ` David Vrabel
0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2012-05-07 8:19 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel
>>> On 04.05.12 at 20:18, David Vrabel <david.vrabel@citrix.com> wrote:
> On 04/05/12 17:26, David Vrabel wrote:
>> On 04/05/12 17:12, Jan Beulich wrote:
>>>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
>>>> From: David Vrabel <david.vrabel@citrix.com>
>>>>
>>>> The dom0_max_vcpus command line option only allows the exact number of
>>>> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
>>>> but no more than the number physically present."
>>>>
>>>> Add min: and max: prefixes to the option to set a minimum number of
>>>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>>>
>>>> For example, with "dom0_max_vcpus=min:4,max:8":
>>>
>>> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
>>> how about simply allowing a range along with a simple number (since
>>> negative values make no sense, omitting either side of the range would
>>> be supportable if necessary.
>>
>> I was copying the way dom0_mem worked but yeah, it's not very pretty.
>>
>> Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
>> thinking of?
>>
>> Using a single value would have to set both <min> and <max> or the
>> behaviour of the option changes (i.e., =N is the same as =N-N).
>
> This is what I ended up with.
>
> 8<------------------------------
> From af1543965db76ab81139de7f072a7c4daf61157f Mon Sep 17 00:00:00 2001
> From: David Vrabel <david.vrabel@citrix.com>
> Date: Fri, 4 May 2012 16:09:52 +0100
> Subject: [PATCH] x86: make the dom0_max_vcpus option more flexible
>
> The dom0_max_vcpus command line option only allows the exact number of
> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
> but no more than the number physically present."
>
> Allow a range for the option to set a minimum number of VCPUs, and a
> maximum which does not exceed the number of PCPUs.
>
> For example, with "dom0_max_vcpus=4-8":
>
> PCPUs Dom0 VCPUs
> 2 4
> 4 4
> 6 6
> 8 8
> 10 8
>
> Existing command lines with "dom0_max_vcpus=N" still work as before.
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
But I'm not sure whether this qualifies for going in for 4.2...
Jan
> ---
> docs/misc/xen-command-line.markdown | 29 +++++++++++++++++++++--
> xen/arch/x86/domain_build.c | 43 +++++++++++++++++++++++++---------
> 2 files changed, 57 insertions(+), 15 deletions(-)
>
> diff --git a/docs/misc/xen-command-line.markdown
> b/docs/misc/xen-command-line.markdown
> index a6195f2..4e4f713 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -272,10 +272,33 @@ Specify the bit width of the DMA heap.
>
> ### dom0\_ioports\_disable
> ### dom0\_max\_vcpus
> -> `= <integer>`
>
> -Specify the maximum number of vcpus to give to dom0. This defaults
> -to the number of pcpus on the host.
> +Either:
> +
> +> `= <integer>`.
> +
> +The number of VCPUs to give to dom0. This number of VCPUs can be more
> +than the number of PCPUs on the host. The default is the number of
> +PCPUs.
> +
> +Or:
> +
> +> `= <min>-<max>` where `<min>` and `<max>` are integers.
> +
> +Gives dom0 a number of VCPUs equal to the number of PCPUs, but always
> +at least `<min>` and no more than `<max>`. Using `<min>` may give
> +more VCPUs than PCPUs. `<min>` or `<max>` may be omitted and the
> +defaults of 1 and unlimited respectively are used instead.
> +
> +For example, with `dom0_max_vcpus=4-8`:
> +
> + Number of
> + PCPUs | Dom0 VCPUs
> + 2 | 4
> + 4 | 4
> + 6 | 6
> + 8 | 8
> + 10 | 8
>
> ### dom0\_mem (ia64)
> > `= <size>`
> diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
> index b3c5d4c..686b626 100644
> --- a/xen/arch/x86/domain_build.c
> +++ b/xen/arch/x86/domain_build.c
> @@ -82,20 +82,39 @@ static void __init parse_dom0_mem(const char *s)
> }
> custom_param("dom0_mem", parse_dom0_mem);
>
> -static unsigned int __initdata opt_dom0_max_vcpus;
> -integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
> +static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
> +static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
> +
> +static void __init parse_dom0_max_vcpus(const char *s)
> +{
> + if (*s == '-') /* -M */
> + opt_dom0_max_vcpus_max = simple_strtoul(s + 1, &s, 0);
> + else { /* N, N-, or N-M */
> + opt_dom0_max_vcpus_min = simple_strtoul(s, &s, 0);
> + if (*s++ == '\0') /* N */
> + opt_dom0_max_vcpus_max = opt_dom0_max_vcpus_min;
> + else if (*s != '\0') /* N-M */
> + opt_dom0_max_vcpus_max = simple_strtoul(s, &s, 0);
> + }
> +}
> +custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
>
> struct vcpu *__init alloc_dom0_vcpu0(void)
> {
> - if ( opt_dom0_max_vcpus == 0 )
> - opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
> - if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
> - opt_dom0_max_vcpus = MAX_VIRT_CPUS;
> + unsigned max_vcpus;
> +
> + max_vcpus = num_cpupool_cpus(cpupool0);
> + if ( opt_dom0_max_vcpus_min > max_vcpus )
> + max_vcpus = opt_dom0_max_vcpus_min;
> + if ( opt_dom0_max_vcpus_max < max_vcpus )
> + max_vcpus = opt_dom0_max_vcpus_max;
> + if ( max_vcpus > MAX_VIRT_CPUS )
> + max_vcpus = MAX_VIRT_CPUS;
>
> - dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
> + dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
> if ( !dom0->vcpu )
> return NULL;
> - dom0->max_vcpus = opt_dom0_max_vcpus;
> + dom0->max_vcpus = max_vcpus;
>
> return alloc_vcpu(dom0, 0, 0);
> }
> @@ -185,11 +204,11 @@ static unsigned long __init compute_dom0_nr_pages(
> unsigned long max_pages = dom0_max_nrpages;
>
> /* Reserve memory for further dom0 vcpu-struct allocations... */
> - avail -= (opt_dom0_max_vcpus - 1UL)
> + avail -= (d->max_vcpus - 1UL)
> << get_order_from_bytes(sizeof(struct vcpu));
> /* ...and compat_l4's, if needed. */
> if ( is_pv_32on64_domain(d) )
> - avail -= opt_dom0_max_vcpus - 1;
> + avail -= d->max_vcpus - 1;
>
> /* Reserve memory for iommu_dom0_init() (rough estimate). */
> if ( iommu_enabled )
> @@ -883,10 +902,10 @@ int __init construct_dom0(
> for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
> shared_info(d, vcpu_info[i].evtchn_upcall_mask) = 1;
>
> - printk("Dom0 has maximum %u VCPUs\n", opt_dom0_max_vcpus);
> + printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
>
> cpu = cpumask_first(cpupool0->cpu_valid);
> - for ( i = 1; i < opt_dom0_max_vcpus; i++ )
> + for ( i = 1; i < d->max_vcpus; i++ )
> {
> cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
> (void)alloc_vcpu(d, i, cpu);
> --
> 1.7.2.5
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-07 8:19 ` Jan Beulich
@ 2012-05-08 13:58 ` David Vrabel
0 siblings, 0 replies; 9+ messages in thread
From: David Vrabel @ 2012-05-08 13:58 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel
On 07/05/12 09:19, Jan Beulich wrote:
>> Subject: [PATCH] x86: make the dom0_max_vcpus option more flexible
>>
>> The dom0_max_vcpus command line option only allows the exact number of
>> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
>> but no more than the number physically present."
>>
>> Allow a range for the option to set a minimum number of VCPUs, and a
>> maximum which does not exceed the number of PCPUs.
>>
>> For example, with "dom0_max_vcpus=4-8":
>>
>> PCPUs Dom0 VCPUs
>> 2 4
>> 4 4
>> 6 6
>> 8 8
>> 10 8
>>
>> Existing command lines with "dom0_max_vcpus=N" still work as before.
>>
>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> But I'm not sure whether this qualifies for going in for 4.2...
I don't think it's a 4.2 candidate. I posted it now because we need this
functionality in XenServer and I didn't want to change the command line
in a way that would be incompatible with upstream.
David
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-05-04 16:26 ` David Vrabel
2012-05-04 18:18 ` David Vrabel
@ 2012-05-07 6:42 ` Jan Beulich
1 sibling, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2012-05-07 6:42 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel
>>> On 04.05.12 at 18:26, David Vrabel <david.vrabel@citrix.com> wrote:
> On 04/05/12 17:12, Jan Beulich wrote:
>>>>> On 04.05.12 at 18:01, David Vrabel <david.vrabel@citrix.com> wrote:
>>> From: David Vrabel <david.vrabel@citrix.com>
>>>
>>> The dom0_max_vcpus command line option only allows the exact number of
>>> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
>>> but no more than the number physically present."
>>>
>>> Add min: and max: prefixes to the option to set a minimum number of
>>> VCPUs, and a maximum which does not exceed the number of PCPUs.
>>>
>>> For example, with "dom0_max_vcpus=min:4,max:8":
>>
>> Both "...max...=min:..." and "...max...=max:" look pretty odd to me;
>> how about simply allowing a range along with a simple number (since
>> negative values make no sense, omitting either side of the range would
>> be supportable if necessary.
>
> I was copying the way dom0_mem worked but yeah, it's not very pretty.
>
> Is dom0_max_vcpus=<min>-<max> (e.g., dom0_max_vcpus=4-8) what you were
> thinking of?
Yes.
> Using a single value would have to set both <min> and <max> or the
> behaviour of the option changes (i.e., =N is the same as =N-N).
That sounds plausible.
Jan
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH] x86: make the dom0_max_vcpus option more flexible
@ 2012-09-10 17:26 David Vrabel
2012-09-11 7:05 ` Jan Beulich
0 siblings, 1 reply; 9+ messages in thread
From: David Vrabel @ 2012-09-10 17:26 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel
From: David Vrabel <david.vrabel@citrix.com>
The dom0_max_vcpus command line option only allows the exact number of
VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
but no more than the number physically present."
Allow a range for the option to set a minimum number of VCPUs, and a
maximum which does not exceed the number of PCPUs.
For example, with "dom0_max_vcpus=4-8":
PCPUs Dom0 VCPUs
2 4
4 4
6 6
8 8
10 8
Existing command lines with "dom0_max_vcpus=N" still work as before
(and are equivalent to dom0_max_vcpus=N-N).
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v2:
- none, repost for Xen 4.3
---
docs/misc/xen-command-line.markdown | 29 +++++++++++++++++++++--
xen/arch/x86/domain_build.c | 43 +++++++++++++++++++++++++---------
2 files changed, 57 insertions(+), 15 deletions(-)
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 6599931..97a2bb2 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -383,10 +383,33 @@ Specify the bit width of the DMA heap.
Specify a list of IO ports to be excluded from dom0 access.
### dom0\_max\_vcpus
-> `= <integer>`
-Specify the maximum number of vcpus to give to dom0. This defaults
-to the number of pcpus on the host.
+Either:
+
+> `= <integer>`.
+
+The number of VCPUs to give to dom0. This number of VCPUs can be more
+than the number of PCPUs on the host. The default is the number of
+PCPUs.
+
+Or:
+
+> `= <min>-<max>` where `<min>` and `<max>` are integers.
+
+Gives dom0 a number of VCPUs equal to the number of PCPUs, but always
+at least `<min>` and no more than `<max>`. Using `<min>` may give
+more VCPUs than PCPUs. `<min>` or `<max>` may be omitted and the
+defaults of 1 and unlimited respectively are used instead.
+
+For example, with `dom0_max_vcpus=4-8`:
+
+ Number of
+ PCPUs | Dom0 VCPUs
+ 2 | 4
+ 4 | 4
+ 6 | 6
+ 8 | 8
+ 10 | 8
### dom0\_mem
> `= List of ( min:<size> | max:<size> | <size> )`
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b3c5d4c..686b626 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -82,20 +82,39 @@ static void __init parse_dom0_mem(const char *s)
}
custom_param("dom0_mem", parse_dom0_mem);
-static unsigned int __initdata opt_dom0_max_vcpus;
-integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
+static unsigned int __initdata opt_dom0_max_vcpus_min = 1;
+static unsigned int __initdata opt_dom0_max_vcpus_max = UINT_MAX;
+
+static void __init parse_dom0_max_vcpus(const char *s)
+{
+ if (*s == '-') /* -M */
+ opt_dom0_max_vcpus_max = simple_strtoul(s + 1, &s, 0);
+ else { /* N, N-, or N-M */
+ opt_dom0_max_vcpus_min = simple_strtoul(s, &s, 0);
+ if (*s++ == '\0') /* N */
+ opt_dom0_max_vcpus_max = opt_dom0_max_vcpus_min;
+ else if (*s != '\0') /* N-M */
+ opt_dom0_max_vcpus_max = simple_strtoul(s, &s, 0);
+ }
+}
+custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
struct vcpu *__init alloc_dom0_vcpu0(void)
{
- if ( opt_dom0_max_vcpus == 0 )
- opt_dom0_max_vcpus = num_cpupool_cpus(cpupool0);
- if ( opt_dom0_max_vcpus > MAX_VIRT_CPUS )
- opt_dom0_max_vcpus = MAX_VIRT_CPUS;
+ unsigned max_vcpus;
+
+ max_vcpus = num_cpupool_cpus(cpupool0);
+ if ( opt_dom0_max_vcpus_min > max_vcpus )
+ max_vcpus = opt_dom0_max_vcpus_min;
+ if ( opt_dom0_max_vcpus_max < max_vcpus )
+ max_vcpus = opt_dom0_max_vcpus_max;
+ if ( max_vcpus > MAX_VIRT_CPUS )
+ max_vcpus = MAX_VIRT_CPUS;
- dom0->vcpu = xzalloc_array(struct vcpu *, opt_dom0_max_vcpus);
+ dom0->vcpu = xzalloc_array(struct vcpu *, max_vcpus);
if ( !dom0->vcpu )
return NULL;
- dom0->max_vcpus = opt_dom0_max_vcpus;
+ dom0->max_vcpus = max_vcpus;
return alloc_vcpu(dom0, 0, 0);
}
@@ -185,11 +204,11 @@ static unsigned long __init compute_dom0_nr_pages(
unsigned long max_pages = dom0_max_nrpages;
/* Reserve memory for further dom0 vcpu-struct allocations... */
- avail -= (opt_dom0_max_vcpus - 1UL)
+ avail -= (d->max_vcpus - 1UL)
<< get_order_from_bytes(sizeof(struct vcpu));
/* ...and compat_l4's, if needed. */
if ( is_pv_32on64_domain(d) )
- avail -= opt_dom0_max_vcpus - 1;
+ avail -= d->max_vcpus - 1;
/* Reserve memory for iommu_dom0_init() (rough estimate). */
if ( iommu_enabled )
@@ -883,10 +902,10 @@ int __init construct_dom0(
for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
shared_info(d, vcpu_info[i].evtchn_upcall_mask) = 1;
- printk("Dom0 has maximum %u VCPUs\n", opt_dom0_max_vcpus);
+ printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
cpu = cpumask_first(cpupool0->cpu_valid);
- for ( i = 1; i < opt_dom0_max_vcpus; i++ )
+ for ( i = 1; i < d->max_vcpus; i++ )
{
cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
(void)alloc_vcpu(d, i, cpu);
--
1.7.2.5
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH] x86: make the dom0_max_vcpus option more flexible
2012-09-10 17:26 David Vrabel
@ 2012-09-11 7:05 ` Jan Beulich
0 siblings, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2012-09-11 7:05 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel
>>> On 10.09.12 at 19:26, David Vrabel <david.vrabel@citrix.com> wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> The dom0_max_vcpus command line option only allows the exact number of
> VCPUs for dom0 to be set. It is not possible to say "up to N VCPUs
> but no more than the number physically present."
>
> Allow a range for the option to set a minimum number of VCPUs, and a
> maximum which does not exceed the number of PCPUs.
>
> For example, with "dom0_max_vcpus=4-8":
>
> PCPUs Dom0 VCPUs
> 2 4
> 4 4
> 6 6
> 8 8
> 10 8
>
> Existing command lines with "dom0_max_vcpus=N" still work as before
> (and are equivalent to dom0_max_vcpus=N-N).
>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> ---
> Changes since v2:
> - none, repost for Xen 4.3
Thanks for resending - it didn't get lost, I just didn't get around
to applying it yet as I first wanted to have it in my local tree for
a short while.
Jan
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads: [~2012-09-11 7:05 UTC | newest]
Thread overview: 9+ messages:
2012-05-04 16:01 [PATCH] x86: make the dom0_max_vcpus option more flexible David Vrabel
2012-05-04 16:12 ` Jan Beulich
2012-05-04 16:26 ` David Vrabel
2012-05-04 18:18 ` David Vrabel
2012-05-07 8:19 ` Jan Beulich
2012-05-08 13:58 ` David Vrabel
2012-05-07 6:42 ` Jan Beulich
-- strict thread matches above, loose matches on Subject: below --
2012-09-10 17:26 David Vrabel
2012-09-11 7:05 ` Jan Beulich