From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
Juergen Gross <jgross@suse.com>, Meng Xu <mengxu@cis.upenn.edu>,
Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen: sched: adjustments to some performance counters
Date: Fri, 25 Sep 2015 11:53:12 +0200
Message-ID: <20150925095312.11655.93745.stgit@Solace.station>
More specifically:

 1) rename vcpu_destroy to vcpu_remove

    It seems this should have been done as part of 7e6b926a
    ("cpupools: Make interface more consistent"), which
    renamed the function but not the counter.

    In fact, because of cpupools, vcpus are not only removed
    from a scheduler when they are destroyed, but also when
    domains move between pools. Make the related statistics
    counter reflect that more accurately.

 2) rename vcpu_init to vcpu_alloc

    As it lives in *_alloc_vdata.

 3) add vcpu_insert

    Matching vcpu_remove, and useful to quickly check whether
    the numbers of insertions and removals match, or, more
    generally, to investigate their relationship.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Meng Xu <mengxu@cis.upenn.edu>
Cc: Jan Beulich <jbeulich@suse.com>
---
This would be v2 of "xen: sched: rename vcpu_destroy perf counter to
vcpu_remove", but both the title and the content changed, so I'm treating it as
a fresh submission (and not carrying over the Reviewed-by Juergen gave to it).

The original submission only changed the name of vcpu_destroy to
vcpu_remove. This one also changes vcpu_init to vcpu_alloc (as suggested by
Juergen) and adds vcpu_insert, for symmetry with vcpu_remove.
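
For reviewers who don't have the perf counter machinery paged in, here is a
minimal, self-contained sketch of the pattern the patch relies on. The macro
names mirror the Xen ones, but the definitions below are simplified stand-ins
for illustration only (in the real tree, counters are declared via
PERFCOUNTER() in perfc_defn.h and SCHED_STAT_CRANK() wraps perfc_incr(),
compiling away when PERF_COUNTERS is unset):

  /*
   * Illustrative stand-ins only: simplified versions of the Xen macros,
   * so this snippet builds on its own (e.g. gcc -DPERF_COUNTERS).
   */
  #include <stdio.h>

  #ifdef PERF_COUNTERS
  static unsigned long perfc_vcpu_insert, perfc_vcpu_remove;
  # define SCHED_STAT_CRANK(x) (perfc_##x++)
  #else
  /* With perf counters disabled, cranking a stat costs nothing. */
  # define SCHED_STAT_CRANK(x) ((void)0)
  #endif

  /* Stand-ins for the per-scheduler insert/remove hooks. */
  static void toy_vcpu_insert(void) { SCHED_STAT_CRANK(vcpu_insert); }
  static void toy_vcpu_remove(void) { SCHED_STAT_CRANK(vcpu_remove); }

  int main(void)
  {
      toy_vcpu_insert();
      toy_vcpu_insert();
      toy_vcpu_remove();
  #ifdef PERF_COUNTERS
      /*
       * With matching accounting in both hooks, insertions and removals
       * can be compared directly, which is the point of adding vcpu_insert.
       */
      printf("vcpu_insert=%lu vcpu_remove=%lu\n",
             perfc_vcpu_insert, perfc_vcpu_remove);
  #endif
      return 0;
  }

At run time, the real counters can then be dumped (e.g. with xenperf) to
check that insertions and removals track each other.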
---
xen/common/sched_credit.c | 6 ++++--
xen/common/sched_credit2.c | 6 ++++--
xen/common/sched_rt.c | 6 ++++--
xen/include/xen/perfc_defn.h | 5 +++--
4 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index a1945ac..3bb28c0 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -896,7 +896,7 @@ csched_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
svc->pri = is_idle_domain(vc->domain) ?
CSCHED_PRI_IDLE : CSCHED_PRI_TS_UNDER;
SCHED_VCPU_STATS_RESET(svc);
- SCHED_STAT_CRANK(vcpu_init);
+ SCHED_STAT_CRANK(vcpu_alloc);
return svc;
}
@@ -907,6 +907,8 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
__runq_insert(vc->processor, svc);
+
+ SCHED_STAT_CRANK(vcpu_insert);
}
static void
@@ -927,7 +929,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
struct csched_dom * const sdom = svc->sdom;
unsigned long flags;
- SCHED_STAT_CRANK(vcpu_destroy);
+ SCHED_STAT_CRANK(vcpu_remove);
if ( test_and_clear_bit(CSCHED_FLAG_VCPU_PARKED, &svc->flags) )
{
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 75e0321..bf1fe6f 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -796,7 +796,7 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
svc->weight = 0;
}
- SCHED_STAT_CRANK(vcpu_init);
+ SCHED_STAT_CRANK(vcpu_alloc);
return svc;
}
@@ -891,6 +891,8 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
vcpu_schedule_unlock_irq(lock, vc);
sdom->nr_vcpus++;
+
+ SCHED_STAT_CRANK(vcpu_insert);
}
CSCHED2_VCPU_CHECK(vc);
@@ -917,7 +919,7 @@ csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
{
spinlock_t *lock;
- SCHED_STAT_CRANK(vcpu_destroy);
+ SCHED_STAT_CRANK(vcpu_remove);
/* Remove from runqueue */
lock = vcpu_schedule_lock_irq(vc);
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 4372486..6fe3bec 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -597,7 +597,7 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
if ( !is_idle_vcpu(vc) )
svc->budget = RTDS_DEFAULT_BUDGET;
- SCHED_STAT_CRANK(vcpu_init);
+ SCHED_STAT_CRANK(vcpu_alloc);
return svc;
}
@@ -635,6 +635,8 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
/* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
+
+ SCHED_STAT_CRANK(vcpu_insert);
}
/*
@@ -648,7 +650,7 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
struct rt_dom * const sdom = svc->sdom;
spinlock_t *lock;
- SCHED_STAT_CRANK(vcpu_destroy);
+ SCHED_STAT_CRANK(vcpu_remove);
BUG_ON( sdom == NULL );
diff --git a/xen/include/xen/perfc_defn.h b/xen/include/xen/perfc_defn.h
index 526002d..76ee803 100644
--- a/xen/include/xen/perfc_defn.h
+++ b/xen/include/xen/perfc_defn.h
@@ -19,8 +19,9 @@ PERFCOUNTER(sched_ctx, "sched: context switches")
PERFCOUNTER(schedule, "sched: specific scheduler")
PERFCOUNTER(dom_init, "sched: dom_init")
PERFCOUNTER(dom_destroy, "sched: dom_destroy")
-PERFCOUNTER(vcpu_init, "sched: vcpu_init")
-PERFCOUNTER(vcpu_destroy, "sched: vcpu_destroy")
+PERFCOUNTER(vcpu_alloc, "sched: vcpu_alloc")
+PERFCOUNTER(vcpu_insert, "sched: vcpu_insert")
+PERFCOUNTER(vcpu_remove, "sched: vcpu_remove")
PERFCOUNTER(vcpu_sleep, "sched: vcpu_sleep")
PERFCOUNTER(vcpu_wake_running, "sched: vcpu_wake_running")
PERFCOUNTER(vcpu_wake_onrunq, "sched: vcpu_wake_onrunq")