From: George Dunlap <George.Dunlap@eu.citrix.com>
To: xen-devel@lists.xensource.com
Subject: [RFC] Credit1: Make weight per-vcpu
Date: Thu, 12 Aug 2010 17:29:55 +0100 [thread overview]
Message-ID: <AANLkTi=oOnfnGh=2b8hX9XhTb-jbsFiks6MObzL6QSEC@mail.gmail.com> (raw)
At the moment, the "weight" parameter for a VM is set on a per-VM
basis. This means that when cpu time is scarce, two VMs with the same
weight will be given the same amount of total cpu time, no matter how
many vcpus each has. I.e., if a VM has 1 vcpu, that vcpu will get x% of
the cpu time; if a VM has 2 vcpus, each vcpu will get (x/2)% of the cpu
time.
I believe this is a counter-intuitive interface. Users often choose
to add vcpus, and when they do so, it's with the expectation that the
VM will need and use more cpu time. In my experience, however, users
rarely change the weight parameter. So the normal course of events is
for a user to decide a VM needs more processing power and add more
vcpus, but not change the weight. The VM still gets the same total
amount of cpu time, just less efficiently allocated (because it's
divided among more vcpus).
The attached patch changes the meaning of the "weight" parameter to
be per-vcpu: each vcpu is given the full weight. So if you add an
extra vcpu, your VM will get more cpu time as well.
This patch has been in Citrix XenServer for several releases now
(checked in June 2008), and seems to fit better with customer
expectations.
-George
[-- Attachment #2: scheduler.per-vcpu-weight --]
Credit1: Make weight per-vcpu
Change the meaning of credit1's "weight" parameter to be per-vcpu,
rather than per-VM.
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
diff -r f45026ec8db5 xen/common/sched_credit.c
--- a/xen/common/sched_credit.c Mon Aug 09 18:29:50 2010 +0100
+++ b/xen/common/sched_credit.c Tue Aug 10 15:23:24 2010 +0100
@@ -555,10 +555,11 @@
sdom->active_vcpu_count++;
list_add(&svc->active_vcpu_elem, &sdom->active_vcpu);
+ /* Make weight per-vcpu */
+ prv->weight += sdom->weight;
if ( list_empty(&sdom->active_sdom_elem) )
{
list_add(&sdom->active_sdom_elem, &prv->active_sdom);
- prv->weight += sdom->weight;
}
}
@@ -576,13 +577,13 @@
CSCHED_VCPU_STAT_CRANK(svc, state_idle);
CSCHED_STAT_CRANK(acct_vcpu_idle);
+ BUG_ON( prv->weight < sdom->weight );
sdom->active_vcpu_count--;
list_del_init(&svc->active_vcpu_elem);
+ prv->weight -= sdom->weight;
if ( list_empty(&sdom->active_vcpu) )
{
- BUG_ON( prv->weight < sdom->weight );
list_del_init(&sdom->active_sdom_elem);
- prv->weight -= sdom->weight;
}
}
@@ -804,8 +805,8 @@
{
if ( !list_empty(&sdom->active_sdom_elem) )
{
- prv->weight -= sdom->weight;
- prv->weight += op->u.credit.weight;
+ prv->weight -= sdom->weight * sdom->active_vcpu_count;
+ prv->weight += op->u.credit.weight * sdom->active_vcpu_count;
}
sdom->weight = op->u.credit.weight;
}
@@ -976,9 +977,9 @@
BUG_ON( is_idle_domain(sdom->dom) );
BUG_ON( sdom->active_vcpu_count == 0 );
BUG_ON( sdom->weight == 0 );
- BUG_ON( sdom->weight > weight_left );
+ BUG_ON( (sdom->weight * sdom->active_vcpu_count) > weight_left );
- weight_left -= sdom->weight;
+ weight_left -= ( sdom->weight * sdom->active_vcpu_count );
/*
* A domain's fair share is computed using its weight in competition
@@ -991,7 +992,9 @@
credit_peak = sdom->active_vcpu_count * CSCHED_CREDITS_PER_ACCT;
if ( prv->credit_balance < 0 )
{
- credit_peak += ( ( -prv->credit_balance * sdom->weight) +
+ credit_peak += ( ( -prv->credit_balance
+ * sdom->weight
+ * sdom->active_vcpu_count) +
(weight_total - 1)
) / weight_total;
}
@@ -1002,11 +1005,15 @@
if ( credit_cap < credit_peak )
credit_peak = credit_cap;
+ /* FIXME -- set cap per-vcpu as well...? */
credit_cap = ( credit_cap + ( sdom->active_vcpu_count - 1 )
) / sdom->active_vcpu_count;
}
- credit_fair = ( ( credit_total * sdom->weight) + (weight_total - 1)
+ credit_fair = ( ( credit_total
+ * sdom->weight
+ * sdom->active_vcpu_count )
+ + (weight_total - 1)
) / weight_total;
if ( credit_fair < credit_peak )