* [PATCH v2 7/7] xl: docs for xl config vnuma options
@ 2013-11-14 3:27 Elena Ufimtseva
2013-11-14 23:31 ` George Dunlap
From: Elena Ufimtseva @ 2013-11-14 3:27 UTC (permalink / raw)
To: xen-devel
Cc: lccycc123, george.dunlap, msw, dario.faggioli, stefano.stabellini,
Elena Ufimtseva
Documentation added to xl command regarding usage of vnuma
configuration options.
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
docs/man/xl.cfg.pod.5 | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index d2d8921..db25521 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -216,6 +216,61 @@ if the values of B<memory=> and B<maxmem=> differ.
A "pre-ballooned" HVM guest needs a balloon driver, without a balloon driver
it will crash.
+=item B<vnuma_nodes=N>
+
+Number of vNUMA nodes the guest will be initialized with on boot. In the
+general case, this is the only required option. If only this option is given,
+all other vNUMA topology parameters take their default values.
+
+=item B<vnuma_mem=[vmem1, vmem2, ...]>
+
+The vnode memory sizes, in MBytes. If the sum of all vnode memory sizes does
+not match the domain memory, or not all nodes are defined here, the total
+memory will be split equally between vnodes.
+
+Example: vnuma_mem=[1024, 1024, 2048, 2048]
+
+=item B<vdistance=[d1, d2, ... ,dn]>
+
+Defines the distance table for vNUMA nodes. Distances for NUMA machines are
+usually represented by a two-dimensional array, and all distances may be
+specified here in one line, by rows. In short, distance can be given as two
+numbers [d1, d2], where d1 is the same-node distance and d2 the distance to
+all other nodes. If vdistance is specified with errors, the default [10, 20]
+is used.
+
+Examples:
+vnuma_nodes = 3
+vdistance = [10, 20]
+will expand to this distance table (this is also the default):
+[10, 20, 20]
+[20, 10, 20]
+[20, 20, 10]
+
+=item B<vnuma_vcpumap=[vcpu1, vcpu2, ...]>
+
+Defines the vcpu to vnode mapping as a list of integers, each representing a
+node number. If not defined, the vcpus are interleaved over the virtual nodes.
+Current limitation: every vNUMA node must have at least one vcpu, otherwise
+the default vcpu_to_vnode mapping will be used.
+Example:
+to map 4 vcpus to 2 nodes (vcpus 0 and 1 -> vnode 0, vcpus 2 and 3 -> vnode 1):
+vnuma_vcpumap = [0, 0, 1, 1]
+
+=item B<vnuma_vnodemap=[p1, p2, ..., pn]>
+
+vnode to pnode mapping. Can be configured if manual vnode placement is
+required. Only takes effect on real NUMA machines, and only if memory or
+other constraints do not prevent it. If the mapping is valid, automatic
+NUMA placement will be disabled. If the mapping is incorrect, automatic
+NUMA placement will be used to select physical nodes for allocation; the
+mask is likewise ignored on non-NUMA machines or if automatic allocation
+fails.
+
+Example:
+assuming a two-node NUMA machine:
+vnuma_vnodemap = [1, 0]
+the first vnode will be placed on node 1, the second on node 0.
+
=back
=head3 Event Actions
--
1.7.10.4
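
Taken together, the options described above might appear in a guest config
file roughly like this (an illustrative sketch only; the memory, vcpu, and
mapping values are made up, not from the patch):

```
memory = 6144
vcpus = 4
vnuma_nodes = 2
vnuma_mem = [2048, 4096]        # sums to the domain memory (6144 MB)
vdistance = [10, 20]            # same-node 10, all other distances 20
vnuma_vcpumap = [0, 0, 1, 1]    # vcpus 0,1 -> vnode 0; vcpus 2,3 -> vnode 1
vnuma_vnodemap = [1, 0]         # vnode 0 -> pnode 1, vnode 1 -> pnode 0
```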
* Re: [PATCH v2 7/7] xl: docs for xl config vnuma options
2013-11-14 3:27 [PATCH v2 7/7] xl: docs for xl config vnuma options Elena Ufimtseva
@ 2013-11-14 23:31 ` George Dunlap
From: George Dunlap @ 2013-11-14 23:31 UTC (permalink / raw)
To: Elena Ufimtseva
Cc: msw, dario.faggioli, stefano.stabellini, lccycc123, xen-devel
On 11/14/2013 03:27 AM, Elena Ufimtseva wrote:
> Documentation added to xl command regarding usage of vnuma
> configuration options.
>
> Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
> ---
> docs/man/xl.cfg.pod.5 | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 55 insertions(+)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index d2d8921..db25521 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -216,6 +216,61 @@ if the values of B<memory=> and B<maxmem=> differ.
> A "pre-ballooned" HVM guest needs a balloon driver, without a balloon driver
> it will crash.
>
> +=item B<vnuma_nodes=N>
> +
> +Number of vNUMA nodes the guest will be initialized with on boot. In the
> +general case, this is the only required option. If only this option is given,
> +all other vNUMA topology parameters take their default values.
> +
> +=item B<vnuma_mem=[vmem1, vmem2, ...]>
> +
> +The vnode memory sizes, in MBytes. If the sum of all vnode memory sizes does
> +not match the domain memory, or not all nodes are defined here, the total
> +memory will be split equally between vnodes.
So the general approach here -- "invalid or empty configurations go to
default" -- isn't quite right. Invalid configurations should throw an
error that stops guest creation. Only unspecified configurations should
go to the default; and the text should say something like, "If
unspecified, the default will be the total memory split equally between
vnodes."
Same with the other options, with one exception...
> +
> +Example: vnuma_mem=[1024, 1024, 2048, 2048]
> +
> +=item B<vdistance=[d1, d2, ... ,dn]>
> +
> +Defines the distance table for vNUMA nodes. Distances for NUMA machines are
> +usually represented by a two-dimensional array, and all distances may be
> +specified here in one line, by rows. In short, distance can be given as two
> +numbers [d1, d2], where d1 is the same-node distance and d2 the distance to
> +all other nodes. If vdistance is specified with errors, the default [10, 20]
> +is used.
> +
> +Examples:
> +vnuma_nodes = 3
> +vdistance = [10, 20]
> +will expand to this distance table (this is also the default):
> +[10, 20, 20]
> +[20, 10, 20]
> +[20, 20, 10]
> +
> +=item B<vnuma_vcpumap=[vcpu1, vcpu2, ...]>
> +
> +Defines the vcpu to vnode mapping as a list of integers, each representing a
> +node number. If not defined, the vcpus are interleaved over the virtual nodes.
> +Current limitation: every vNUMA node must have at least one vcpu, otherwise
> +the default vcpu_to_vnode mapping will be used.
> +Example:
> +to map 4 vcpus to 2 nodes (vcpus 0 and 1 -> vnode 0, vcpus 2 and 3 -> vnode 1):
> +vnuma_vcpumap = [0, 0, 1, 1]
> +
> +=item B<vnuma_vnodemap=[p1, p2, ..., pn]>
> +
> +vnode to pnode mapping. Can be configured if manual vnode placement is
> +required. Only takes effect on real NUMA machines, and only if memory or
> +other constraints do not prevent it. If the mapping is valid, automatic
> +NUMA placement will be disabled. If the mapping is incorrect, automatic
> +NUMA placement will be used to select physical nodes for allocation; the
> +mask is likewise ignored on non-NUMA machines or if automatic allocation
> +fails.
I think by default, if a vnode->pnode mapping is given that can't be
satisfied, we should throw an error. But it may make sense to add a
flag, either in the config file or on the command-line, that will allow
a "fall-back" to the automatic placement if the specified placement
can't be satisfied. (If this is complicated, it can wait to be added in
later.)
-George
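
As an aside, the [d1, d2] shorthand discussed in the patch expands
mechanically into the full table. A quick sketch of that expansion (not part
of the patch; the function name is illustrative):

```python
def expand_vdistance(vnodes, vdistance):
    """Expand the [d1, d2] shorthand into a full vnodes x vnodes
    distance table: d1 on the diagonal (same-node distance),
    d2 everywhere else."""
    d1, d2 = vdistance
    return [[d1 if i == j else d2 for j in range(vnodes)]
            for i in range(vnodes)]

# vnodes = 3, vdistance = [10, 20] gives the table shown in the patch:
# [10, 20, 20] / [20, 10, 20] / [20, 20, 10]
for row in expand_vdistance(3, [10, 20]):
    print(row)
```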