From: Dieter Bloms <dieter@bloms.de>
To: Dieter Bloms <dieter@bloms.de>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
Dario Faggioli <dario.faggioli@citrix.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Ian Campbell <Ian.Campbell@citrix.com>,
xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-users] xl doesn't honour the parameter cpu_weight from my config file while xm does honour it
Date: Tue, 24 Apr 2012 21:35:25 +0200 [thread overview]
Message-ID: <20120424193524.GA20565@bloms.de> (raw)
In-Reply-To: <20120424182633.GA20286@bloms.de>
[-- Attachment #1: Type: text/plain, Size: 716 bytes --]
Hi,
On Tue, Apr 24, Dieter Bloms wrote:
> Hi,
>
> On Tue, Apr 24, Ian Jackson wrote:
>
> > Perhaps it would be better to have a single sched_params struct which
> > contained all the parameters needed for any scheduler, and simply have
> > them ignored by libxl for schedulers we're not using.
...
> Anyway, I think it is a good design to have one struct with all parameters,
> and I'm willing to implement it.
I've made a new patch.
Hopefully it fits all your needs.
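For example, with this patch applied, a domU config file can set the scheduling parameters directly (the name and values below are only illustrative):

```
# /etc/xen/domU.cfg -- illustrative example for the credit scheduler
name       = "domU"
memory     = 512
vcpus      = 2
cpu_weight = 512   # twice the default weight of 256
cap        = 100   # at most one full physical CPU
```

After `xl create domU.cfg`, the values should be visible via `xl sched-credit -d domU`.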
--
best regards
Dieter Bloms
--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.
[-- Attachment #2: add_support_for_cpu_weight_config_in_xl.diff --]
[-- Type: text/x-diff, Size: 6982 bytes --]
libxl: set domain scheduling parameters while creating the domU
Domain-specific scheduling parameters like cpu_weight, cap, slice, ...
are now set during domain creation, so these parameters can be defined
in the domain config file.
Signed-off-by: Dieter Bloms <dieter@bloms.de>
diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index e2cd251..b0c8064 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -112,6 +112,44 @@ List of which cpus the guest is allowed to use. Default behavior is
(all vcpus will run on cpus 0,2,3,5), or `cpus=["2", "3"]` (all vcpus
will run on cpus 2 and 3).
+=item B<cpu_weight=WEIGHT>
+
+A domain with a weight of 512 will get twice as much CPU as a domain
+with a weight of 256 on a contended host.
+Legal weights range from 1 to 65535 and the default is 256.
+Can be set for the credit, credit2 and sedf schedulers.
+
+=item B<cap=N>
+
+The cap optionally fixes the maximum amount of CPU a domain will be
+able to consume, even if the host system has idle CPU cycles.
+The cap is expressed in percentage of one physical CPU:
+100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.
+The default, 0, means there is no upper cap.
+Can be set for the credit scheduler.
+
+=item B<period=NANOSECONDS>
+
+The EDF scheduling period, in nanoseconds. Every period, the domain
+receives the amount of CPU time defined in B<slice>.
+Can be set for the sedf scheduler.
+
+=item B<slice=NANOSECONDS>
+
+The EDF time slice, in nanoseconds. It defines the amount of CPU time
+the domain receives within each period.
+Can be set for the sedf scheduler.
+
+=item B<latency=N>
+
+Scaled period used when the domain is doing heavy I/O.
+Can be set for the sedf scheduler.
+
+=item B<extratime=BOOLEAN>
+
+Boolean flag allowing the domain to run in extra time.
+Can be set for the sedf scheduler.
+
=item B<memory=MBYTES>
Start the guest with MBYTES megabytes of RAM.
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 0bdd654..55b033a 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -42,6 +42,40 @@ libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid)
return LIBXL_DOMAIN_TYPE_PV;
}
+int libxl__sched_set_params(libxl__gc *gc, uint32_t domid, libxl_sched_params *scparams)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    libxl_scheduler sched;
+    libxl_sched_sedf_domain sedf_info;
+    libxl_sched_credit_domain credit_info;
+    libxl_sched_credit2_domain credit2_info;
+    int ret;
+
+    sched = libxl_get_scheduler(ctx);
+    switch (sched) {
+    case LIBXL_SCHEDULER_SEDF:
+        sedf_info.period = scparams->period;
+        sedf_info.slice = scparams->slice;
+        sedf_info.latency = scparams->latency;
+        sedf_info.extratime = scparams->extratime;
+        sedf_info.weight = scparams->weight;
+        ret = libxl_sched_sedf_domain_set(ctx, domid, &sedf_info);
+        break;
+    case LIBXL_SCHEDULER_CREDIT:
+        credit_info.weight = scparams->weight;
+        credit_info.cap = scparams->cap;
+        ret = libxl_sched_credit_domain_set(ctx, domid, &credit_info);
+        break;
+    case LIBXL_SCHEDULER_CREDIT2:
+        credit2_info.weight = scparams->weight;
+        ret = libxl_sched_credit2_domain_set(ctx, domid, &credit2_info);
+        break;
+    default:
+        ret = -1;
+    }
+    return ret;
+}
+
int libxl__domain_shutdown_reason(libxl__gc *gc, uint32_t domid)
{
libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -126,6 +160,8 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
char **ents, **hvm_ents;
int i;
+    libxl__sched_set_params(gc, domid, &info->sched_params);
+
libxl_cpuid_apply_policy(ctx, domid);
if (info->cpuid != NULL)
libxl_cpuid_set(ctx, domid, info->cpuid);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index a4b933b..2b76b0e 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -617,6 +617,7 @@ int libxl__atfork_init(libxl_ctx *ctx);
/* from xl_dom */
_hidden libxl_domain_type libxl__domain_type(libxl__gc *gc, uint32_t domid);
_hidden int libxl__domain_shutdown_reason(libxl__gc *gc, uint32_t domid);
+_hidden int libxl__sched_set_params(libxl__gc *gc, uint32_t domid, libxl_sched_params *scparams);
#define LIBXL__DOMAIN_IS_TYPE(gc, domid, type) \
libxl__domain_type((gc), (domid)) == LIBXL_DOMAIN_TYPE_##type
typedef struct {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 5cf9708..7789327 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -224,6 +224,17 @@ libxl_domain_create_info = Struct("domain_create_info",[
MemKB = UInt(64, init_val = "LIBXL_MEMKB_DEFAULT")
+libxl_sched_params = Struct("sched_params",[
+    ("weight",       integer),
+    ("cap",          integer),
+    ("tslice_ms",    integer),
+    ("ratelimit_us", integer),
+    ("period",       integer),
+    ("slice",        integer),
+    ("latency",      integer),
+    ("extratime",    integer),
+    ], dir=DIR_IN)
+
# Instances of libxl_file_reference contained in this struct which
# have been mapped (with libxl_file_reference_map) will be unmapped
# by libxl_domain_build/restore. If either of these are never called
@@ -255,6 +266,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
("extra_pv", libxl_string_list),
# extra parameters pass directly to qemu for HVM guest, NULL terminated
("extra_hvm", libxl_string_list),
+    # scheduling parameters for all scheduler types
+ ("sched_params", libxl_sched_params),
("u", KeyedUnion(None, libxl_domain_type, "type",
[("hvm", Struct(None, [("firmware", string),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 5703512..8e67307 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -587,6 +587,23 @@ static void parse_config_data(const char *configfile_filename_report,
libxl_domain_build_info_init_type(b_info, c_info->type);
/* the following is the actual config parsing with overriding values in the structures */
+    if (!xlu_cfg_get_long (config, "cpu_weight", &l, 0))
+        b_info->sched_params.weight = l;
+    if (!xlu_cfg_get_long (config, "cap", &l, 0))
+        b_info->sched_params.cap = l;
+    if (!xlu_cfg_get_long (config, "tslice_ms", &l, 0))
+        b_info->sched_params.tslice_ms = l;
+    if (!xlu_cfg_get_long (config, "ratelimit_us", &l, 0))
+        b_info->sched_params.ratelimit_us = l;
+    if (!xlu_cfg_get_long (config, "period", &l, 0))
+        b_info->sched_params.period = l;
+    if (!xlu_cfg_get_long (config, "slice", &l, 0))
+        b_info->sched_params.slice = l;
+    if (!xlu_cfg_get_long (config, "latency", &l, 0))
+        b_info->sched_params.latency = l;
+    if (!xlu_cfg_get_long (config, "extratime", &l, 0))
+        b_info->sched_params.extratime = l;
+
if (!xlu_cfg_get_long (config, "vcpus", &l, 0)) {
b_info->max_vcpus = l;
b_info->cur_vcpus = (1 << l) - 1;
[-- Attachment #3: Type: text/plain, Size: 126 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel