From: Juergen Gross <jgross@suse.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues
Date: Fri, 9 Oct 2015 07:31:28 +0200 [thread overview]
Message-ID: <56175130.3060309@suse.com> (raw)
In-Reply-To: <20151008125251.12522.3916.stgit@Solace.station>
On 10/08/2015 02:52 PM, Dario Faggioli wrote:
> Idle vCPUs should never be explicitly inserted
> into any scheduler's runqueue. They are just put
> in execution immediately (either at boot, or when
> a pCPU is assigned to a cpupool), and it is the
> first scheduling decision that --by preempting
> them-- 'parks' them in the queue.
>
> Credit2 and RTDS already intercept and forbid
> such insertions, in their implementations of
> insert_vcpu(). Credit1 does so implicitly: the
> idle vCPU is always already running when
> insert_vcpu() is called, and hence is never put
> in the runqueue.
>
> Let's make it *always* explicit, as that simplifies
> things by quite a bit. For instance, we can now
> introduce some BUG_ON() guards, with the twofold
> objective of making this assumption clear, and of
> catching misuse and bugs.
>
> The check for whether we would be inserting an
> idle vCPU now happens once, for all schedulers,
> in generic code, at vCPU initialization time;
> and we no longer try (and always fail!) to do
> so when manipulating cpupools.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
> xen/common/sched_credit.c | 2 ++
> xen/common/sched_credit2.c | 25 ++++++++++---------------
> xen/common/sched_rt.c | 4 +---
> xen/common/schedule.c | 21 +++++++++++----------
> 4 files changed, 24 insertions(+), 28 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index fccb368..fc447a7 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -905,6 +905,8 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> struct csched_vcpu *svc = vc->sched_priv;
> spinlock_t *lock;
>
> + BUG_ON( is_idle_vcpu(vc) );
> +
> lock = vcpu_schedule_lock_irq(vc);
>
> if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 912e1a2..234f798 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -870,30 +870,25 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> {
> struct csched2_vcpu *svc = vc->sched_priv;
> struct csched2_dom * const sdom = svc->sdom;
> + spinlock_t *lock;
>
> printk("%s: Inserting %pv\n", __func__, vc);
>
> - /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
> - * been called for that cpu.
> - */
> - if ( ! is_idle_vcpu(vc) )
> - {
> - spinlock_t *lock;
> + BUG_ON(is_idle_vcpu(vc));
>
> - /* FIXME: Do we need the private lock here? */
> - list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
> + /* FIXME: Do we need the private lock here? */
> + list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
>
> - /* Add vcpu to runqueue of initial processor */
> - lock = vcpu_schedule_lock_irq(vc);
> + /* Add vcpu to runqueue of initial processor */
> + lock = vcpu_schedule_lock_irq(vc);
>
> - runq_assign(ops, vc);
> + runq_assign(ops, vc);
>
> - vcpu_schedule_unlock_irq(lock, vc);
> + vcpu_schedule_unlock_irq(lock, vc);
>
> - sdom->nr_vcpus++;
> + sdom->nr_vcpus++;
>
> - SCHED_STAT_CRANK(vcpu_insert);
> - }
> + SCHED_STAT_CRANK(vcpu_insert);
>
> CSCHED2_VCPU_CHECK(vc);
> }
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 1086399..37a32a4 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -624,9 +624,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> s_time_t now = NOW();
> spinlock_t *lock;
>
> - /* not addlocate idle vcpu to dom vcpu list */
> - if ( is_idle_vcpu(vc) )
> - return;
> + BUG_ON( is_idle_vcpu(vc) );
>
> lock = vcpu_schedule_lock_irq(vc);
> if ( now >= svc->cur_deadline )
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 9aa209d..55daf73 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
> init_timer(&v->poll_timer, poll_timer_fn,
> v, v->processor);
>
> - /* Idle VCPUs are scheduled immediately. */
> + v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> + if ( v->sched_priv == NULL )
> + return 1;
> +
> + TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> +
> + /* Idle VCPUs are scheduled immediately, not inserting them in runqueue. */
> if ( is_idle_domain(d) )
> {
> per_cpu(schedule_data, v->processor).curr = v;
> v->is_running = 1;
> }
> -
> - TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> -
> - v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> - if ( v->sched_priv == NULL )
> - return 1;
> -
> - SCHED_OP(DOM2OP(d), insert_vcpu, v);
> + else
> + {
> + SCHED_OP(DOM2OP(d), insert_vcpu, v);
> + }
>
> return 0;
> }
> @@ -1514,7 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
> per_cpu(schedule_data, cpu).sched_priv = ppriv;
> SCHED_OP(new_ops, tick_resume, cpu);
> - SCHED_OP(new_ops, insert_vcpu, idle);
>
> SCHED_OP(old_ops, free_vdata, vpriv_old);
> SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
Thread overview: 29+ messages
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
2015-10-08 12:52 ` [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
2015-10-08 13:16 ` Andrew Cooper
2015-10-08 12:52 ` [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
2015-10-08 13:18 ` Andrew Cooper
2015-10-08 15:16 ` George Dunlap
2015-10-08 15:49 ` Dario Faggioli
2015-10-08 20:12 ` Dario Faggioli
2015-10-08 12:52 ` [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
2015-10-08 15:27 ` George Dunlap
2015-10-08 15:39 ` Dario Faggioli
2015-10-09 5:31 ` Juergen Gross [this message]
2015-10-08 12:52 ` [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
2015-10-08 13:47 ` Andrew Cooper
2015-10-08 15:31 ` George Dunlap
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
2015-10-08 13:10 ` Andrew Cooper
2015-10-08 13:17 ` Dario Faggioli
2015-10-08 13:56 ` Andrew Cooper
2015-10-08 15:32 ` Dario Faggioli
2015-10-08 15:39 ` Andrew Cooper
2015-10-08 15:40 ` George Dunlap
2015-10-08 12:53 ` [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump Dario Faggioli
2015-10-08 15:42 ` George Dunlap
2015-10-08 15:59 ` Dario Faggioli
2015-10-08 12:53 ` [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW() Dario Faggioli
2015-10-08 13:12 ` Andrew Cooper
2015-10-08 15:37 ` Jan Beulich
2015-10-09 5:09 ` Juergen Gross