* [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
@ 2015-10-08 12:52 ` Dario Faggioli
2015-10-08 13:16 ` Andrew Cooper
2015-10-08 12:52 ` [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
` (5 subsequent siblings)
6 siblings, 1 reply; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:52 UTC
To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Jan Beulich
In fact, csched_vcpu_remove() (i.e., the credit1
implementation of remove_vcpu()) manipulates runqueues,
so holding the runqueue lock is necessary.
Also, while there, *_lock_irq() (for the private lock) is
enough; there is no need for *_lock_irqsave().
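To illustrate, here is a minimal sketch of the two idioms
(not code from this patch; prv is the scheduler's private
data, as in the hunk below):

    /*
     * *_irqsave: saves and restores the caller's interrupt
     * state, so it is safe even if interrupts may already be
     * disabled on entry.
     */
    unsigned long flags;
    spin_lock_irqsave(&prv->lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&prv->lock, flags);

    /*
     * *_irq: assumes interrupts are enabled on entry, and
     * re-enables them unconditionally on unlock. Cheaper, and
     * correct here, since this hook is never called with
     * interrupts disabled.
     */
    spin_lock_irq(&prv->lock);
    /* ... critical section ... */
    spin_unlock_irq(&prv->lock);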
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
Changes from v1 (within the other series):
* split the patch (wrt the original patch, in the original
series), and take care, in this one, only of remove_vcpu();
* removed pointless parentheses.
---
xen/common/sched_credit.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index b8f28fe..6f71e0d 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -926,7 +926,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
struct csched_private *prv = CSCHED_PRIV(ops);
struct csched_vcpu * const svc = CSCHED_VCPU(vc);
struct csched_dom * const sdom = svc->sdom;
- unsigned long flags;
+ spinlock_t *lock;
SCHED_STAT_CRANK(vcpu_remove);
@@ -936,15 +936,19 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
vcpu_unpause(svc->vcpu);
}
+ lock = vcpu_schedule_lock_irq(vc);
+
if ( __vcpu_on_runq(svc) )
__runq_remove(svc);
- spin_lock_irqsave(&(prv->lock), flags);
+ vcpu_schedule_unlock_irq(lock, vc);
+
+ spin_lock_irq(&prv->lock);
if ( !list_empty(&svc->active_vcpu_elem) )
__csched_vcpu_acct_stop_locked(prv, svc);
- spin_unlock_irqrestore(&(prv->lock), flags);
+ spin_unlock_irq(&prv->lock);
BUG_ON( sdom == NULL );
BUG_ON( !list_empty(&svc->runq_elem) );
* Re: [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1
2015-10-08 12:52 ` [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
@ 2015-10-08 13:16 ` Andrew Cooper
0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:16 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Jan Beulich
On 08/10/15 13:52, Dario Faggioli wrote:
> In fact, csched_vcpu_remove() (i.e., the credit1
> implementation of remove_vcpu()) manipulates runqueues,
> so holding the runqueue lock is necessary.
>
> Also, while there, *_lock_irq() (for the private lock) is
> enough; there is no need for *_lock_irqsave().
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
2015-10-08 12:52 ` [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
@ 2015-10-08 12:52 ` Dario Faggioli
2015-10-08 13:18 ` Andrew Cooper
2015-10-08 15:16 ` George Dunlap
2015-10-08 12:52 ` [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
` (4 subsequent siblings)
6 siblings, 2 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:52 UTC
To: xen-devel; +Cc: George Dunlap, Andrew Cooper, Meng Xu, Jan Beulich
The insert_vcpu scheduler hook is called with an inconsistent
locking strategy. In fact, it is sometimes invoked while
holding the runqueue lock and sometimes when that is not the
case.
For instance, in case of schedule_cpu_switch() the lock is
acquired in generic code. On the other hand, in case of
sched_move_domain(), locking is left as a responsibility
of the schedulers implementing the hook.
This results in Credit1 and RTDS schedulers ending up (in
case of sched_move_domain()) doing runqueue manipulation
without holding any runqueue lock, which is a bug. (Credit2
was doing the locking by itself already.)
The right thing is to defer locking to the specific schedulers,
as they are the ones that know what, how and when it is best
to lock (as in: runqueue locks, vs. private scheduler locks,
vs. both, etc.).
This patch, therefore:
- removes any locking around insert_vcpu() from generic
code;
- adds proper locking in the hook implementations, for
both Credit1 and RTDS.
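The pattern the hooks follow (just a sketch, with sched_X as a
placeholder name; the actual hunks are below) is:

    static void
    sched_X_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
    {
        spinlock_t *lock;

        /* Take the runqueue lock of vc's processor, with IRQs off. */
        lock = vcpu_schedule_lock_irq(vc);

        /* ... runqueue manipulation involving vc ... */

        vcpu_schedule_unlock_irq(lock, vc);
    }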
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Meng Xu <mengxu@cis.upenn.edu>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
---
Changes from v1 (within the other series):
* split the patch (wrt the original patch, in the original
series), and take care, in this one, only of insert_vcpu();
---
xen/common/sched_credit.c | 5 +++++
xen/common/sched_rt.c | 3 +++
xen/common/schedule.c | 6 ------
3 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 6f71e0d..fccb368 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -903,10 +903,15 @@ static void
csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
struct csched_vcpu *svc = vc->sched_priv;
+ spinlock_t *lock;
+
+ lock = vcpu_schedule_lock_irq(vc);
if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
__runq_insert(vc->processor, svc);
+ vcpu_schedule_unlock_irq(lock, vc);
+
SCHED_STAT_CRANK(vcpu_insert);
}
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 6a341b1..1086399 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -622,16 +622,19 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
struct rt_vcpu *svc = rt_vcpu(vc);
s_time_t now = NOW();
+ spinlock_t *lock;
/* not addlocate idle vcpu to dom vcpu list */
if ( is_idle_vcpu(vc) )
return;
+ lock = vcpu_schedule_lock_irq(vc);
if ( now >= svc->cur_deadline )
rt_update_deadline(now, svc);
if ( !__vcpu_on_q(svc) && vcpu_runnable(vc) && !vc->is_running )
__runq_insert(ops, svc);
+ vcpu_schedule_unlock_irq(lock, vc);
/* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index c5f640f..9aa209d 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1488,9 +1488,7 @@ void __init scheduler_init(void)
int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
{
- unsigned long flags;
struct vcpu *idle;
- spinlock_t *lock;
void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
struct scheduler *old_ops = per_cpu(scheduler, cpu);
struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
@@ -1509,8 +1507,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
return -ENOMEM;
}
- lock = pcpu_schedule_lock_irqsave(cpu, &flags);
-
SCHED_OP(old_ops, tick_suspend, cpu);
vpriv_old = idle->sched_priv;
idle->sched_priv = vpriv;
@@ -1520,8 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
SCHED_OP(new_ops, tick_resume, cpu);
SCHED_OP(new_ops, insert_vcpu, idle);
- pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
-
SCHED_OP(old_ops, free_vdata, vpriv_old);
SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
* Re: [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
2015-10-08 12:52 ` [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
@ 2015-10-08 13:18 ` Andrew Cooper
2015-10-08 15:16 ` George Dunlap
1 sibling, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:18 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Meng Xu, Jan Beulich
On 08/10/15 13:52, Dario Faggioli wrote:
> The insert_vcpu scheduler hook is called with an inconsistent
> locking strategy. In fact, it is sometimes invoked while
> holding the runqueue lock and sometimes when that is not the
> case.
>
> For instance, in case of schedule_cpu_switch() the lock is
> acquired in generic code. On the other hand, in case of
> sched_move_domain(), locking is left as a responsibility
> of the schedulers implementing the hook.
>
> This results in Credit1 and RTDS schedulers ending up (in
> case of sched_move_domain()) doing runqueue manipulation
> without holding any runqueue lock, which is a bug. (Credit2
> was doing the locking by itself already.)
>
> The right thing is to defer locking to the specific schedulers,
> as they are the ones that know what, how and when it is best
> to lock (as in: runqueue locks, vs. private scheduler locks,
> vs. both, etc.).
>
> This patch, therefore:
> - removes any locking around insert_vcpu() from generic
> code;
> - adds proper locking in the hook implementations, for
> both Credit1 and RTDS.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
* Re: [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
2015-10-08 12:52 ` [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
2015-10-08 13:18 ` Andrew Cooper
@ 2015-10-08 15:16 ` George Dunlap
2015-10-08 15:49 ` Dario Faggioli
1 sibling, 1 reply; 29+ messages in thread
From: George Dunlap @ 2015-10-08 15:16 UTC
To: Dario Faggioli, xen-devel
Cc: George Dunlap, Andrew Cooper, Meng Xu, Jan Beulich
On 08/10/15 13:52, Dario Faggioli wrote:
> The insert_vcpu scheduler hook is called with an inconsistent
> locking strategy. In fact, it is sometimes invoked while
> holding the runqueue lock and sometimes when that is not the
> case.
>
> For instance, in case of schedule_cpu_switch() the lock is
> acquired in generic code. On the other hand, in case of
> sched_move_domain(), locking is left as a responsibility
> of the schedulers implementing the hook.
>
> This results in Credit1 and RTDS schedulers ending up (in
> case of sched_move_domain()) doing runqueue manipulation
> without holding any runqueue lock, which is a bug. (Credit2
> was doing the locking by itself already.)
>
> The right thing is to defer locking to the specific schedulers,
> as they are the ones that know what, how and when it is best
> to lock (as in: runqueue locks, vs. private scheduler locks,
> vs. both, etc.).
>
> This patch, therefore:
> - removes any locking around insert_vcpu() from generic
> code;
> - adds proper locking in the hook implementations, for
> both Credit1 and RTDS.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Meng Xu <mengxu@cis.upenn.edu>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> ---
> Changes from v1 (within the other series):
> * split the patch (wrt the original patch, in the original
> series), and take care, in this one, only of insert_vcpu();
> ---
> xen/common/sched_credit.c | 5 +++++
> xen/common/sched_rt.c | 3 +++
> xen/common/schedule.c | 6 ------
> 3 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index 6f71e0d..fccb368 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -903,10 +903,15 @@ static void
> csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> {
> struct csched_vcpu *svc = vc->sched_priv;
> + spinlock_t *lock;
> +
> + lock = vcpu_schedule_lock_irq(vc);
>
> if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
> __runq_insert(vc->processor, svc);
>
> + vcpu_schedule_unlock_irq(lock, vc);
> +
> SCHED_STAT_CRANK(vcpu_insert);
> }
>
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 6a341b1..1086399 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -622,16 +622,19 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> {
> struct rt_vcpu *svc = rt_vcpu(vc);
> s_time_t now = NOW();
> + spinlock_t *lock;
>
> /* not addlocate idle vcpu to dom vcpu list */
> if ( is_idle_vcpu(vc) )
> return;
>
> + lock = vcpu_schedule_lock_irq(vc);
> if ( now >= svc->cur_deadline )
> rt_update_deadline(now, svc);
>
> if ( !__vcpu_on_q(svc) && vcpu_runnable(vc) && !vc->is_running )
> __runq_insert(ops, svc);
> + vcpu_schedule_unlock_irq(lock, vc);
>
> /* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
> list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index c5f640f..9aa209d 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -1488,9 +1488,7 @@ void __init scheduler_init(void)
>
> int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> {
> - unsigned long flags;
> struct vcpu *idle;
> - spinlock_t *lock;
> void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
> struct scheduler *old_ops = per_cpu(scheduler, cpu);
> struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
> @@ -1509,8 +1507,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> return -ENOMEM;
> }
>
> - lock = pcpu_schedule_lock_irqsave(cpu, &flags);
> -
> SCHED_OP(old_ops, tick_suspend, cpu);
> vpriv_old = idle->sched_priv;
> idle->sched_priv = vpriv;
> @@ -1520,8 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> SCHED_OP(new_ops, tick_resume, cpu);
> SCHED_OP(new_ops, insert_vcpu, idle);
>
> - pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
It seems to me that the locking here wasn't to protect insert_vcpu, but
to prevent any scheduling events from happening on cpu until all the
expected infrastructure (ticks, idle vcpu, &c) were ready. I can't
immediately convince myself that removing these is safe in that regard.
Can you address this?
-George
* Re: [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
2015-10-08 15:16 ` George Dunlap
@ 2015-10-08 15:49 ` Dario Faggioli
2015-10-08 20:12 ` Dario Faggioli
0 siblings, 1 reply; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 15:49 UTC
To: George Dunlap, xen-devel
Cc: George Dunlap, Andrew Cooper, Meng Xu, Jan Beulich
On Thu, 2015-10-08 at 16:16 +0100, George Dunlap wrote:
> On 08/10/15 13:52, Dario Faggioli wrote:
> > diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> > index c5f640f..9aa209d 100644
> > --- a/xen/common/schedule.c
> > +++ b/xen/common/schedule.c
> > @@ -1488,9 +1488,7 @@ void __init scheduler_init(void)
> >
> > int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> > {
> > - unsigned long flags;
> > struct vcpu *idle;
> > - spinlock_t *lock;
> > void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
> > struct scheduler *old_ops = per_cpu(scheduler, cpu);
> > struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
> > @@ -1509,8 +1507,6 @@ int schedule_cpu_switch(unsigned int cpu,
> > struct cpupool *c)
> > return -ENOMEM;
> > }
> >
> > - lock = pcpu_schedule_lock_irqsave(cpu, &flags);
> > -
> > SCHED_OP(old_ops, tick_suspend, cpu);
> > vpriv_old = idle->sched_priv;
> > idle->sched_priv = vpriv;
> > @@ -1520,8 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu,
> > struct cpupool *c)
> > SCHED_OP(new_ops, tick_resume, cpu);
> > SCHED_OP(new_ops, insert_vcpu, idle);
> >
> > - pcpu_schedule_unlock_irqrestore(lock, flags, cpu);
>
> It seems to me that the locking here wasn't to protect insert_vcpu,
> but
> to prevent any scheduling events from happening on cpu until all the
> expected infrastructure (ticks, idle vcpu, &c) were ready. I can't
> immediately convince myself that removing these is safe in that
> regard.
> Can you address this?
>
Scheduling can't happen on the cpu until after the end of this
function, when, in cpupool_assign_cpu_locked(), we set to 1 its
corresponding bit in the target cpupool's cpu_valid mask.
In fact, a scheduling event happening before that would mean
that a cpu outside of any cpupool is somehow being considered for
scheduling, which would be a bug. Indeed, I sent patches back in
July to cure occurrences of exactly that behavior.
We've been discussing basically the same issue with Jan here:
https://www.choon.net/forum/read.php?22,3817262,3817489
And I'll add the promised ASSERT() and comment when sending v2 of
that patch. :-)
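Something along these lines, I mean (just a sketch of the idea,
not necessarily the final code):

    /*
     * The cpu is not valid in any cpupool yet, so no scheduling
     * decision can pick it up before cpupool_assign_cpu_locked()
     * sets its bit in the destination pool's cpu_valid mask.
     */
    ASSERT(c == NULL || !cpumask_test_cpu(cpu, c->cpu_valid));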
Make sense?
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS
2015-10-08 15:49 ` Dario Faggioli
@ 2015-10-08 20:12 ` Dario Faggioli
0 siblings, 0 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 20:12 UTC
To: George Dunlap, xen-devel
Cc: George Dunlap, Andrew Cooper, Meng Xu, Jan Beulich
[-- Attachment #1.1: Type: text/plain, Size: 1389 bytes --]
On Thu, 2015-10-08 at 17:49 +0200, Dario Faggioli wrote:
> On Thu, 2015-10-08 at 16:16 +0100, George Dunlap wrote:
> > It seems to me that the locking here wasn't to protect insert_vcpu,
> > but
> > to prevent any scheduling events from happening on cpu until all
> > the
> > expected infrastructure (ticks, idle vcpu, &c) were ready. I can't
> > immediately convince myself that removing these is safe in that
> > regard.
> > Can you address this?
> >
> Scheduling can't happen on the cpu until after the end of this
> function, when, in cpupool_assign_cpu_locked(), we set to 1 its
> corresponding bit in the target cpupool's cpu_valid mask.
> We've been discussing basically the same issue with Jan here:
> https://www.choon.net/forum/read.php?22,3817262,3817489
>
> And I'll add the promised ASSERT() and comment when sending v2 of
> that patch. :-)
>
Or, of course, I can put them down when doing v2 of _this_ patch, if
that makes it easier to understand why what I'm doing here is safe.
Yes, I think I'll do it here; I like it better that way.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
2015-10-08 12:52 ` [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1 Dario Faggioli
2015-10-08 12:52 ` [PATCH 2/7] xen: sched: fix locking for insert_vcpu() in credit1 and RTDS Dario Faggioli
@ 2015-10-08 12:52 ` Dario Faggioli
2015-10-08 15:27 ` George Dunlap
2015-10-09 5:31 ` Juergen Gross
2015-10-08 12:52 ` [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
` (3 subsequent siblings)
6 siblings, 2 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:52 UTC
To: xen-devel; +Cc: George Dunlap, Juergen Gross
Idle vCPUs should never really be explicitly inserted
in any of the schedulers' runqueues. In fact, they are
just put into execution immediately (either at boot, or
when a pCPU is assigned to a cpupool), and it is the
first scheduling decision that --by preempting
them-- will 'park' them in the queue.
In fact, avoiding inserting those vCPUs in runqueues
is something that Credit2 and RTDS intercept and
forbid already (when implementing insert_vcpu()).
Credit1 does that implicitly, as the idle vCPU will
always be already running when insert_vcpu() is called,
and hence not put in the runqueue.
Let's make it *always* explicit, as that simplifies
things by quite a bit. For instance, we can now
introduce some BUG_ON() guards, with the twofold
objective of making this assumption clear, and of
catching misuse and bugs.
The check for whether we'd be inserting an idle vCPU
in a queue now happens in generic code, once and for
all schedulers, at vCPU initialization time; and we
can just avoid trying (and always failing!) to do so
during cpupool manipulations.
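In short, the tail of sched_init_vcpu() becomes (condensed from
the hunk below):

    v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
    if ( v->sched_priv == NULL )
        return 1;

    if ( is_idle_domain(d) )
    {
        /* Idle vCPUs run immediately; they are never queued. */
        per_cpu(schedule_data, v->processor).curr = v;
        v->is_running = 1;
    }
    else
        SCHED_OP(DOM2OP(d), insert_vcpu, v);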
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
xen/common/sched_credit.c | 2 ++
xen/common/sched_credit2.c | 25 ++++++++++---------------
xen/common/sched_rt.c | 4 +---
xen/common/schedule.c | 21 +++++++++++----------
4 files changed, 24 insertions(+), 28 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index fccb368..fc447a7 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -905,6 +905,8 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
struct csched_vcpu *svc = vc->sched_priv;
spinlock_t *lock;
+ BUG_ON( is_idle_vcpu(vc) );
+
lock = vcpu_schedule_lock_irq(vc);
if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 912e1a2..234f798 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -870,30 +870,25 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
{
struct csched2_vcpu *svc = vc->sched_priv;
struct csched2_dom * const sdom = svc->sdom;
+ spinlock_t *lock;
printk("%s: Inserting %pv\n", __func__, vc);
- /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
- * been called for that cpu.
- */
- if ( ! is_idle_vcpu(vc) )
- {
- spinlock_t *lock;
+ BUG_ON(is_idle_vcpu(vc));
- /* FIXME: Do we need the private lock here? */
- list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
+ /* FIXME: Do we need the private lock here? */
+ list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
- /* Add vcpu to runqueue of initial processor */
- lock = vcpu_schedule_lock_irq(vc);
+ /* Add vcpu to runqueue of initial processor */
+ lock = vcpu_schedule_lock_irq(vc);
- runq_assign(ops, vc);
+ runq_assign(ops, vc);
- vcpu_schedule_unlock_irq(lock, vc);
+ vcpu_schedule_unlock_irq(lock, vc);
- sdom->nr_vcpus++;
+ sdom->nr_vcpus++;
- SCHED_STAT_CRANK(vcpu_insert);
- }
+ SCHED_STAT_CRANK(vcpu_insert);
CSCHED2_VCPU_CHECK(vc);
}
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 1086399..37a32a4 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -624,9 +624,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
s_time_t now = NOW();
spinlock_t *lock;
- /* not addlocate idle vcpu to dom vcpu list */
- if ( is_idle_vcpu(vc) )
- return;
+ BUG_ON( is_idle_vcpu(vc) );
lock = vcpu_schedule_lock_irq(vc);
if ( now >= svc->cur_deadline )
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 9aa209d..55daf73 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
init_timer(&v->poll_timer, poll_timer_fn,
v, v->processor);
- /* Idle VCPUs are scheduled immediately. */
+ v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
+ if ( v->sched_priv == NULL )
+ return 1;
+
+ TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
+
+ /* Idle VCPUs are scheduled immediately, not inserting them in runqueue. */
if ( is_idle_domain(d) )
{
per_cpu(schedule_data, v->processor).curr = v;
v->is_running = 1;
}
-
- TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
-
- v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
- if ( v->sched_priv == NULL )
- return 1;
-
- SCHED_OP(DOM2OP(d), insert_vcpu, v);
+ else
+ {
+ SCHED_OP(DOM2OP(d), insert_vcpu, v);
+ }
return 0;
}
@@ -1514,7 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
per_cpu(schedule_data, cpu).sched_priv = ppriv;
SCHED_OP(new_ops, tick_resume, cpu);
- SCHED_OP(new_ops, insert_vcpu, idle);
SCHED_OP(old_ops, free_vdata, vpriv_old);
SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
* Re: [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues
2015-10-08 12:52 ` [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
@ 2015-10-08 15:27 ` George Dunlap
2015-10-08 15:39 ` Dario Faggioli
2015-10-09 5:31 ` Juergen Gross
1 sibling, 1 reply; 29+ messages in thread
From: George Dunlap @ 2015-10-08 15:27 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Juergen Gross
On 08/10/15 13:52, Dario Faggioli wrote:
> Idle vCPUs should never really be explicitly inserted
> in any of the schedulers' runqueues. In fact, they are
> just put into execution immediately (either at boot, or
> when a pCPU is assigned to a cpupool), and it is the
> first scheduling decision that --by preempting
> them-- will 'park' them in the queue.
>
> In fact, avoiding inserting those vCPUs in runqueues
> is something that Credit2 and RTDS intercept and
> forbid already (when implementing insert_vcpu()).
> Credit1 does that implicitly, as the idle vCPU will
> always be already running when insert_vcpu() is called,
> and hence not put in the runqueue.
>
> Let's make it *always* explicit, as that simplifies
> things by quite a bit. For instance, we can now
> introduce some BUG_ON() guards, with the twofold
> objective of making this assumption clear, and of
> catching misuse and bugs.
>
> The check for whether we'd be inserting an idle vCPU
> in a queue now happens in generic code, once and for
> all schedulers, at vCPU initialization time; and we
> can just avoid trying (and always failing!) to do so
> during cpupool manipulations.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Although it might be easier to evaluate the previous patch if this one
were moved behind it in the series:
Acked-by: George Dunlap <george.dunlap@citrix.com>
-George
* Re: [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues
2015-10-08 15:27 ` George Dunlap
@ 2015-10-08 15:39 ` Dario Faggioli
0 siblings, 0 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 15:39 UTC
To: George Dunlap, xen-devel; +Cc: George Dunlap, Juergen Gross
On Thu, 2015-10-08 at 16:27 +0100, George Dunlap wrote:
> On 08/10/15 13:52, Dario Faggioli wrote:
> > [...]
> > The check for whether we'd be inserting an idle vCPU
> > in a queue now happens in generic code, once and for
> > all schedulers, at vCPU initialization time; and we
> > can just avoid trying (and always failing!) to do so
> > during cpupool manipulations.
> >
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> Although it might be easier to evaluate the previous patch if this
> one
> were moved behind it in the series:
>
Indeed. I only did it like this because, in case we want to backport
patches 1 and 2, we would not need to also backport this one (which is
more of an improvement than a bugfix).
> Acked-by: George Dunlap <george.dunlap@citrix.com>
>
Thanks. :-)
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues
2015-10-08 12:52 ` [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
2015-10-08 15:27 ` George Dunlap
@ 2015-10-09 5:31 ` Juergen Gross
1 sibling, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2015-10-09 5:31 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 10/08/2015 02:52 PM, Dario Faggioli wrote:
> Idle vCPUs should never really be explicitly inserted
> in any of the schedulers' runqueues. In fact, they are
> just put into execution immediately (either at boot, or
> when a pCPU is assigned to a cpupool), and it is the
> first scheduling decision that --by preempting
> them-- will 'park' them in the queue.
>
> In fact, avoiding inserting those vCPUs in runqueues
> is something that Credit2 and RTDS intercept and
> forbid already (when implementing insert_vcpu()).
> Credit1 does that implicitly, as the idle vCPU will
> always be already running when insert_vcpu() is called,
> and hence not put in the runqueue.
>
> Let's make it *always* explicit, as that simplifies
> things by quite a bit. For instance, we can now
> introduce some BUG_ON() guards, with the twofold
> objective of making this assumption clear, and of
> catching misuse and bugs.
>
> The check for whether we'd be inserting an idle vCPU
> in a queue now happens in generic code, once and for
> all schedulers, at vCPU initialization time; and we
> can just avoid trying (and always failing!) to do so
> during cpupool manipulations.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
> xen/common/sched_credit.c | 2 ++
> xen/common/sched_credit2.c | 25 ++++++++++---------------
> xen/common/sched_rt.c | 4 +---
> xen/common/schedule.c | 21 +++++++++++----------
> 4 files changed, 24 insertions(+), 28 deletions(-)
>
> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> index fccb368..fc447a7 100644
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -905,6 +905,8 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> struct csched_vcpu *svc = vc->sched_priv;
> spinlock_t *lock;
>
> + BUG_ON( is_idle_vcpu(vc) );
> +
> lock = vcpu_schedule_lock_irq(vc);
>
> if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 912e1a2..234f798 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -870,30 +870,25 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> {
> struct csched2_vcpu *svc = vc->sched_priv;
> struct csched2_dom * const sdom = svc->sdom;
> + spinlock_t *lock;
>
> printk("%s: Inserting %pv\n", __func__, vc);
>
> - /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
> - * been called for that cpu.
> - */
> - if ( ! is_idle_vcpu(vc) )
> - {
> - spinlock_t *lock;
> + BUG_ON(is_idle_vcpu(vc));
>
> - /* FIXME: Do we need the private lock here? */
> - list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
> + /* FIXME: Do we need the private lock here? */
> + list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
>
> - /* Add vcpu to runqueue of initial processor */
> - lock = vcpu_schedule_lock_irq(vc);
> + /* Add vcpu to runqueue of initial processor */
> + lock = vcpu_schedule_lock_irq(vc);
>
> - runq_assign(ops, vc);
> + runq_assign(ops, vc);
>
> - vcpu_schedule_unlock_irq(lock, vc);
> + vcpu_schedule_unlock_irq(lock, vc);
>
> - sdom->nr_vcpus++;
> + sdom->nr_vcpus++;
>
> - SCHED_STAT_CRANK(vcpu_insert);
> - }
> + SCHED_STAT_CRANK(vcpu_insert);
>
> CSCHED2_VCPU_CHECK(vc);
> }
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 1086399..37a32a4 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -624,9 +624,7 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> s_time_t now = NOW();
> spinlock_t *lock;
>
> - /* not addlocate idle vcpu to dom vcpu list */
> - if ( is_idle_vcpu(vc) )
> - return;
> + BUG_ON( is_idle_vcpu(vc) );
>
> lock = vcpu_schedule_lock_irq(vc);
> if ( now >= svc->cur_deadline )
> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> index 9aa209d..55daf73 100644
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -240,20 +240,22 @@ int sched_init_vcpu(struct vcpu *v, unsigned int processor)
> init_timer(&v->poll_timer, poll_timer_fn,
> v, v->processor);
>
> - /* Idle VCPUs are scheduled immediately. */
> + v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> + if ( v->sched_priv == NULL )
> + return 1;
> +
> + TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> +
> + /* Idle VCPUs are scheduled immediately, not inserting them in runqueue. */
> if ( is_idle_domain(d) )
> {
> per_cpu(schedule_data, v->processor).curr = v;
> v->is_running = 1;
> }
> -
> - TRACE_2D(TRC_SCHED_DOM_ADD, v->domain->domain_id, v->vcpu_id);
> -
> - v->sched_priv = SCHED_OP(DOM2OP(d), alloc_vdata, v, d->sched_priv);
> - if ( v->sched_priv == NULL )
> - return 1;
> -
> - SCHED_OP(DOM2OP(d), insert_vcpu, v);
> + else
> + {
> + SCHED_OP(DOM2OP(d), insert_vcpu, v);
> + }
>
> return 0;
> }
> @@ -1514,7 +1516,6 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
> ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
> per_cpu(schedule_data, cpu).sched_priv = ppriv;
> SCHED_OP(new_ops, tick_resume, cpu);
> - SCHED_OP(new_ops, insert_vcpu, idle);
>
> SCHED_OP(old_ops, free_vdata, vpriv_old);
> SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
* [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
` (2 preceding siblings ...)
2015-10-08 12:52 ` [PATCH 3/7] xen: sched: better handle (not) inserting idle vCPUs in runqueues Dario Faggioli
@ 2015-10-08 12:52 ` Dario Faggioli
2015-10-08 13:47 ` Andrew Cooper
2015-10-08 15:31 ` George Dunlap
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
` (2 subsequent siblings)
6 siblings, 2 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:52 UTC
To: xen-devel; +Cc: George Dunlap, Meng Xu
As, currently, there is no reason to bother having
it and keeping it updated.
In fact, it is only used for dumping and changing
vCPUs' parameters, but that can be achieved easily with
for_each_vcpu.
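For instance, for dumping (condensed from the hunk below),
instead of walking the scheduler-private list, we can just do:

    struct vcpu *vc;

    for_each_vcpu ( sdom->dom, vc )
        rt_dump_vcpu(ops, rt_vcpu(vc));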
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Meng Xu <mengxu@cis.upenn.edu>
---
xen/common/sched_rt.c | 36 +++++++++++++-----------------------
1 file changed, 13 insertions(+), 23 deletions(-)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 37a32a4..797adc1 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -160,7 +160,6 @@ struct rt_private {
*/
struct rt_vcpu {
struct list_head q_elem; /* on the runq/depletedq list */
- struct list_head sdom_elem; /* on the domain VCPU list */
/* Up-pointers */
struct rt_dom *sdom;
@@ -182,7 +181,6 @@ struct rt_vcpu {
* Domain
*/
struct rt_dom {
- struct list_head vcpu; /* link its VCPUs */
struct list_head sdom_elem; /* link list on rt_priv */
struct domain *dom; /* pointer to upper domain */
};
@@ -290,7 +288,7 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
static void
rt_dump(const struct scheduler *ops)
{
- struct list_head *iter_sdom, *iter_svc, *runq, *depletedq, *iter;
+ struct list_head *runq, *depletedq, *iter;
struct rt_private *prv = rt_priv(ops);
struct rt_vcpu *svc;
struct rt_dom *sdom;
@@ -319,14 +317,16 @@ rt_dump(const struct scheduler *ops)
}
printk("Domain info:\n");
- list_for_each( iter_sdom, &prv->sdom )
+ list_for_each( iter, &prv->sdom )
{
- sdom = list_entry(iter_sdom, struct rt_dom, sdom_elem);
+ struct vcpu *vc;
+
+ sdom = list_entry(iter, struct rt_dom, sdom_elem);
printk("\tdomain: %d\n", sdom->dom->domain_id);
- list_for_each( iter_svc, &sdom->vcpu )
+ for_each_vcpu( sdom->dom, vc )
{
- svc = list_entry(iter_svc, struct rt_vcpu, sdom_elem);
+ svc = rt_vcpu(vc);
rt_dump_vcpu(ops, svc);
}
}
@@ -527,7 +527,6 @@ rt_alloc_domdata(const struct scheduler *ops, struct domain *dom)
if ( sdom == NULL )
return NULL;
- INIT_LIST_HEAD(&sdom->vcpu);
INIT_LIST_HEAD(&sdom->sdom_elem);
sdom->dom = dom;
@@ -587,7 +586,6 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
return NULL;
INIT_LIST_HEAD(&svc->q_elem);
- INIT_LIST_HEAD(&svc->sdom_elem);
svc->flags = 0U;
svc->sdom = dd;
svc->vcpu = vc;
@@ -614,8 +612,7 @@ rt_free_vdata(const struct scheduler *ops, void *priv)
* This function is called in sched_move_domain() in schedule.c
* When move a domain to a new cpupool.
* It inserts vcpus of moving domain to the scheduler's RunQ in
- * dest. cpupool; and insert rt_vcpu svc to scheduler-specific
- * vcpu list of the dom
+ * dest. cpupool.
*/
static void
rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
@@ -634,15 +631,11 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
__runq_insert(ops, svc);
vcpu_schedule_unlock_irq(lock, vc);
- /* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
- list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
-
SCHED_STAT_CRANK(vcpu_insert);
}
/*
- * Remove rt_vcpu svc from the old scheduler in source cpupool; and
- * Remove rt_vcpu svc from scheduler-specific vcpu list of the dom
+ * Remove rt_vcpu svc from the old scheduler in source cpupool.
*/
static void
rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
@@ -659,9 +652,6 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
if ( __vcpu_on_q(svc) )
__q_remove(svc);
vcpu_schedule_unlock_irq(lock, vc);
-
- if ( !is_idle_vcpu(vc) )
- list_del_init(&svc->sdom_elem);
}
/*
@@ -1137,7 +1127,7 @@ rt_dom_cntl(
struct rt_private *prv = rt_priv(ops);
struct rt_dom * const sdom = rt_dom(d);
struct rt_vcpu *svc;
- struct list_head *iter;
+ struct vcpu *vc;
unsigned long flags;
int rc = 0;
@@ -1145,7 +1135,7 @@ rt_dom_cntl(
{
case XEN_DOMCTL_SCHEDOP_getinfo:
spin_lock_irqsave(&prv->lock, flags);
- svc = list_entry(sdom->vcpu.next, struct rt_vcpu, sdom_elem);
+ svc = rt_vcpu(sdom->dom->vcpu[0]);
op->u.rtds.period = svc->period / MICROSECS(1); /* transfer to us */
op->u.rtds.budget = svc->budget / MICROSECS(1);
spin_unlock_irqrestore(&prv->lock, flags);
@@ -1157,9 +1147,9 @@ rt_dom_cntl(
break;
}
spin_lock_irqsave(&prv->lock, flags);
- list_for_each( iter, &sdom->vcpu )
+ for_each_vcpu( sdom->dom, vc )
{
- struct rt_vcpu * svc = list_entry(iter, struct rt_vcpu, sdom_elem);
+ svc = rt_vcpu(vc);
svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
svc->budget = MICROSECS(op->u.rtds.budget);
}
* Re: [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS
2015-10-08 12:52 ` [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
@ 2015-10-08 13:47 ` Andrew Cooper
2015-10-08 15:31 ` George Dunlap
1 sibling, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:47 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Meng Xu
On 08/10/15 13:52, Dario Faggioli wrote:
> @@ -319,14 +317,16 @@ rt_dump(const struct scheduler *ops)
> }
>
> printk("Domain info:\n");
> - list_for_each( iter_sdom, &prv->sdom )
> + list_for_each( iter, &prv->sdom )
> {
> - sdom = list_entry(iter_sdom, struct rt_dom, sdom_elem);
> + struct vcpu *vc;
> +
> + sdom = list_entry(iter, struct rt_dom, sdom_elem);
> printk("\tdomain: %d\n", sdom->dom->domain_id);
>
> - list_for_each( iter_svc, &sdom->vcpu )
> + for_each_vcpu( sdom->dom, vc )
Space before bracket, as you are already changing the line.
> {
> - svc = list_entry(iter_svc, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(vc);
> rt_dump_vcpu(ops, svc);
> }
> }
>
> @@ -1145,7 +1135,7 @@ rt_dom_cntl(
> {
> case XEN_DOMCTL_SCHEDOP_getinfo:
> spin_lock_irqsave(&prv->lock, flags);
> - svc = list_entry(sdom->vcpu.next, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(sdom->dom->vcpu[0]);
This change swaps one potentially bad pointer for another.
In the former case, there was no guarantee that sdom->vcpu had any
entries in it, potentially making svc a wild pointer.
In the latter case, there is no guarantee that dom->vcpu has been
allocated yet. You must check d->max_vcpus > 0 before dereferencing
d->vcpu[].
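I.e., something along these lines (a sketch; whether -EINVAL is
the most appropriate error code is a separate question):

    case XEN_DOMCTL_SCHEDOP_getinfo:
        if ( d->max_vcpus == 0 )
        {
            rc = -EINVAL;
            break;
        }
        spin_lock_irqsave(&prv->lock, flags);
        svc = rt_vcpu(d->vcpu[0]);
        /* ... as before ... */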
~Andrew
* Re: [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS
2015-10-08 12:52 ` [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
2015-10-08 13:47 ` Andrew Cooper
@ 2015-10-08 15:31 ` George Dunlap
1 sibling, 0 replies; 29+ messages in thread
From: George Dunlap @ 2015-10-08 15:31 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap, Meng Xu
On 08/10/15 13:52, Dario Faggioli wrote:
> As, currently, there is no reason to bother having
> it and keeping it updated.
>
> In fact, it is only used for dumping and changing
> vCPUs' parameters, but that can be achieved easily with
> for_each_vcpu.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
With Andrew's comments addressed:
Acked-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Meng Xu <mengxu@cis.upenn.edu>
> ---
> xen/common/sched_rt.c | 36 +++++++++++++-----------------------
> 1 file changed, 13 insertions(+), 23 deletions(-)
>
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 37a32a4..797adc1 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -160,7 +160,6 @@ struct rt_private {
> */
> struct rt_vcpu {
> struct list_head q_elem; /* on the runq/depletedq list */
> - struct list_head sdom_elem; /* on the domain VCPU list */
>
> /* Up-pointers */
> struct rt_dom *sdom;
> @@ -182,7 +181,6 @@ struct rt_vcpu {
> * Domain
> */
> struct rt_dom {
> - struct list_head vcpu; /* link its VCPUs */
> struct list_head sdom_elem; /* link list on rt_priv */
> struct domain *dom; /* pointer to upper domain */
> };
> @@ -290,7 +288,7 @@ rt_dump_pcpu(const struct scheduler *ops, int cpu)
> static void
> rt_dump(const struct scheduler *ops)
> {
> - struct list_head *iter_sdom, *iter_svc, *runq, *depletedq, *iter;
> + struct list_head *runq, *depletedq, *iter;
> struct rt_private *prv = rt_priv(ops);
> struct rt_vcpu *svc;
> struct rt_dom *sdom;
> @@ -319,14 +317,16 @@ rt_dump(const struct scheduler *ops)
> }
>
> printk("Domain info:\n");
> - list_for_each( iter_sdom, &prv->sdom )
> + list_for_each( iter, &prv->sdom )
> {
> - sdom = list_entry(iter_sdom, struct rt_dom, sdom_elem);
> + struct vcpu *vc;
> +
> + sdom = list_entry(iter, struct rt_dom, sdom_elem);
> printk("\tdomain: %d\n", sdom->dom->domain_id);
>
> - list_for_each( iter_svc, &sdom->vcpu )
> + for_each_vcpu( sdom->dom, vc )
> {
> - svc = list_entry(iter_svc, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(vc);
> rt_dump_vcpu(ops, svc);
> }
> }
> @@ -527,7 +527,6 @@ rt_alloc_domdata(const struct scheduler *ops, struct domain *dom)
> if ( sdom == NULL )
> return NULL;
>
> - INIT_LIST_HEAD(&sdom->vcpu);
> INIT_LIST_HEAD(&sdom->sdom_elem);
> sdom->dom = dom;
>
> @@ -587,7 +586,6 @@ rt_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
> return NULL;
>
> INIT_LIST_HEAD(&svc->q_elem);
> - INIT_LIST_HEAD(&svc->sdom_elem);
> svc->flags = 0U;
> svc->sdom = dd;
> svc->vcpu = vc;
> @@ -614,8 +612,7 @@ rt_free_vdata(const struct scheduler *ops, void *priv)
> * This function is called in sched_move_domain() in schedule.c
> * When move a domain to a new cpupool.
> * It inserts vcpus of moving domain to the scheduler's RunQ in
> - * dest. cpupool; and insert rt_vcpu svc to scheduler-specific
> - * vcpu list of the dom
> + * dest. cpupool.
> */
> static void
> rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> @@ -634,15 +631,11 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
> __runq_insert(ops, svc);
> vcpu_schedule_unlock_irq(lock, vc);
>
> - /* add rt_vcpu svc to scheduler-specific vcpu list of the dom */
> - list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
> -
> SCHED_STAT_CRANK(vcpu_insert);
> }
>
> /*
> - * Remove rt_vcpu svc from the old scheduler in source cpupool; and
> - * Remove rt_vcpu svc from scheduler-specific vcpu list of the dom
> + * Remove rt_vcpu svc from the old scheduler in source cpupool.
> */
> static void
> rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
> @@ -659,9 +652,6 @@ rt_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
> if ( __vcpu_on_q(svc) )
> __q_remove(svc);
> vcpu_schedule_unlock_irq(lock, vc);
> -
> - if ( !is_idle_vcpu(vc) )
> - list_del_init(&svc->sdom_elem);
> }
>
> /*
> @@ -1137,7 +1127,7 @@ rt_dom_cntl(
> struct rt_private *prv = rt_priv(ops);
> struct rt_dom * const sdom = rt_dom(d);
> struct rt_vcpu *svc;
> - struct list_head *iter;
> + struct vcpu *vc;
> unsigned long flags;
> int rc = 0;
>
> @@ -1145,7 +1135,7 @@ rt_dom_cntl(
> {
> case XEN_DOMCTL_SCHEDOP_getinfo:
> spin_lock_irqsave(&prv->lock, flags);
> - svc = list_entry(sdom->vcpu.next, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(sdom->dom->vcpu[0]);
> op->u.rtds.period = svc->period / MICROSECS(1); /* transfer to us */
> op->u.rtds.budget = svc->budget / MICROSECS(1);
> spin_unlock_irqrestore(&prv->lock, flags);
> @@ -1157,9 +1147,9 @@ rt_dom_cntl(
> break;
> }
> spin_lock_irqsave(&prv->lock, flags);
> - list_for_each( iter, &sdom->vcpu )
> + for_each_vcpu( sdom->dom, vc )
> {
> - struct rt_vcpu * svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(vc);
> svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
> svc->budget = MICROSECS(op->u.rtds.budget);
> }
>
* [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
` (3 preceding siblings ...)
2015-10-08 12:52 ` [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS Dario Faggioli
@ 2015-10-08 12:53 ` Dario Faggioli
2015-10-08 13:10 ` Andrew Cooper
` (2 more replies)
2015-10-08 12:53 ` [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump Dario Faggioli
2015-10-08 12:53 ` [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW() Dario Faggioli
6 siblings, 3 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:53 UTC
To: xen-devel; +Cc: George Dunlap
As, currently, there is no reason to bother having
it and keeping it updated.
In fact, it is only used for dumping and changing
vCPUs' parameters, but that can be achieved easily with
for_each_vcpu.
While there, improve the alignment of comments, and
add a const qualifier to a pointer, making things
more consistent with what happens everywhere else
in the source file.
This also allows us to kill one of the remaining
FIXMEs in the code, which is always good.
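The dumping loop, for example, becomes (condensed from the hunk
below, const qualifier included):

    struct vcpu *vc;

    for_each_vcpu ( sdom->dom, vc )
    {
        struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);

        /* ... lock the vcpu, dump it, unlock ... */
    }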
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
xen/common/sched_credit2.c | 34 +++++++++++-----------------------
xen/common/sched_rt.c | 2 +-
2 files changed, 12 insertions(+), 24 deletions(-)
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 234f798..178a665 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -234,9 +234,8 @@ struct csched2_private {
* Virtual CPU
*/
struct csched2_vcpu {
- struct list_head rqd_elem; /* On the runqueue data list */
- struct list_head sdom_elem; /* On the domain vcpu list */
- struct list_head runq_elem; /* On the runqueue */
+ struct list_head rqd_elem; /* On the runqueue data list */
+ struct list_head runq_elem; /* On the runqueue */
struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue */
/* Up-pointers */
@@ -261,7 +260,6 @@ struct csched2_vcpu {
* Domain
*/
struct csched2_dom {
- struct list_head vcpu;
struct list_head sdom_elem;
struct domain *dom;
uint16_t weight;
@@ -772,7 +770,6 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
return NULL;
INIT_LIST_HEAD(&svc->rqd_elem);
- INIT_LIST_HEAD(&svc->sdom_elem);
INIT_LIST_HEAD(&svc->runq_elem);
svc->sdom = dd;
@@ -876,9 +873,6 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
BUG_ON(is_idle_vcpu(vc));
- /* FIXME: Do we need the private lock here? */
- list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
-
/* Add vcpu to runqueue of initial processor */
lock = vcpu_schedule_lock_irq(vc);
@@ -923,10 +917,6 @@ csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
vcpu_schedule_unlock_irq(lock, vc);
- /* Remove from sdom list. Don't need a lock for this, as it's called
- * syncronously when nothing else can happen. */
- list_del_init(&svc->sdom_elem);
-
svc->sdom->nr_vcpus--;
}
}
@@ -1443,7 +1433,7 @@ csched2_dom_cntl(
if ( op->u.credit2.weight != 0 )
{
- struct list_head *iter;
+ struct vcpu *vc;
int old_weight;
old_weight = sdom->weight;
@@ -1451,9 +1441,9 @@ csched2_dom_cntl(
sdom->weight = op->u.credit2.weight;
/* Update weights for vcpus, and max_weight for runqueues on which they reside */
- list_for_each ( iter, &sdom->vcpu )
+ for_each_vcpu ( d, vc )
{
- struct csched2_vcpu *svc = list_entry(iter, struct csched2_vcpu, sdom_elem);
+ struct csched2_vcpu *svc = CSCHED2_VCPU(vc);
/* NB: Locking order is important here. Because we grab this lock here, we
* must never lock csched2_priv.lock if we're holding a runqueue lock.
@@ -1487,7 +1477,6 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
return NULL;
/* Initialize credit and weight */
- INIT_LIST_HEAD(&sdom->vcpu);
INIT_LIST_HEAD(&sdom->sdom_elem);
sdom->dom = dom;
sdom->weight = CSCHED2_DEFAULT_WEIGHT;
@@ -1539,9 +1528,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
static void
csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
{
- struct csched2_dom *sdom = CSCHED2_DOM(dom);
-
- BUG_ON(!list_empty(&sdom->vcpu));
+ BUG_ON(CSCHED2_DOM(dom)->nr_vcpus != 0);
csched2_free_domdata(ops, CSCHED2_DOM(dom));
}
@@ -1881,7 +1868,7 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
static void
csched2_dump(const struct scheduler *ops)
{
- struct list_head *iter_sdom, *iter_svc;
+ struct list_head *iter_sdom;
struct csched2_private *prv = CSCHED2_PRIV(ops);
unsigned long flags;
int i, loop;
@@ -1926,6 +1913,8 @@ csched2_dump(const struct scheduler *ops)
list_for_each( iter_sdom, &prv->sdom )
{
struct csched2_dom *sdom;
+ struct vcpu *vc;
+
sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
printk("\tDomain: %d w %d v %d\n\t",
@@ -1933,12 +1922,11 @@ csched2_dump(const struct scheduler *ops)
sdom->weight,
sdom->nr_vcpus);
- list_for_each( iter_svc, &sdom->vcpu )
+ for_each_vcpu( sdom->dom, vc )
{
- struct csched2_vcpu *svc;
+ struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
spinlock_t *lock;
- svc = list_entry(iter_svc, struct csched2_vcpu, sdom_elem);
lock = vcpu_schedule_lock(svc->vcpu);
printk("\t%3d: ", ++loop);
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 797adc1..0ca4902 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -1147,7 +1147,7 @@ rt_dom_cntl(
break;
}
spin_lock_irqsave(&prv->lock, flags);
- for_each_vcpu( sdom->dom, vc )
+ for_each_vcpu( d, vc )
{
svc = rt_vcpu(vc);
svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
@ 2015-10-08 13:10 ` Andrew Cooper
2015-10-08 13:17 ` Dario Faggioli
2015-10-08 13:56 ` Andrew Cooper
2015-10-08 15:40 ` George Dunlap
2 siblings, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:10 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 08/10/15 13:53, Dario Faggioli wrote:
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 797adc1..0ca4902 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -1147,7 +1147,7 @@ rt_dom_cntl(
> break;
> }
> spin_lock_irqsave(&prv->lock, flags);
> - for_each_vcpu( sdom->dom, vc )
> + for_each_vcpu( d, vc )
> {
> svc = rt_vcpu(vc);
> svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
This hunk looks like it wants to be in the previous patch.
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 13:10 ` Andrew Cooper
@ 2015-10-08 13:17 ` Dario Faggioli
0 siblings, 0 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 13:17 UTC
To: Andrew Cooper, xen-devel; +Cc: George Dunlap
On Thu, 2015-10-08 at 14:10 +0100, Andrew Cooper wrote:
> On 08/10/15 13:53, Dario Faggioli wrote:
> > diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> > index 797adc1..0ca4902 100644
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -1147,7 +1147,7 @@ rt_dom_cntl(
> > break;
> > }
> > spin_lock_irqsave(&prv->lock, flags);
> > - for_each_vcpu( sdom->dom, vc )
> > + for_each_vcpu( d, vc )
> > {
> > svc = rt_vcpu(vc);
> > svc->period = MICROSECS(op->u.rtds.period); /*
> > transfer to nanosec */
>
> This hunk looks like it wants to be in the previous patch.
>
It clearly does. `stg refresh' fat-fingering, I guess.
Sorry, I'll take care when respinning! :-/
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
2015-10-08 13:10 ` Andrew Cooper
@ 2015-10-08 13:56 ` Andrew Cooper
2015-10-08 15:32 ` Dario Faggioli
2015-10-08 15:40 ` George Dunlap
2 siblings, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:56 UTC
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 08/10/15 13:53, Dario Faggioli wrote:
> @@ -1443,7 +1433,7 @@ csched2_dom_cntl(
>
> if ( op->u.credit2.weight != 0 )
> {
> - struct list_head *iter;
> + struct vcpu *vc;
Any chance of starting to align on the more common practice of just v
for a vcpu?
> @@ -1539,9 +1528,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
> static void
> csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
> {
> - struct csched2_dom *sdom = CSCHED2_DOM(dom);
> -
> - BUG_ON(!list_empty(&sdom->vcpu));
> + BUG_ON(CSCHED2_DOM(dom)->nr_vcpus != 0);
This is a latent bug (excuse the pun) which can be triggered by
userspace. There is no guarantee or requirement that a domain
registered with a scheduler has vcpus.
~Andrew
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 13:56 ` Andrew Cooper
@ 2015-10-08 15:32 ` Dario Faggioli
2015-10-08 15:39 ` Andrew Cooper
0 siblings, 1 reply; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 15:32 UTC (permalink / raw)
To: Andrew Cooper, xen-devel; +Cc: George Dunlap
On Thu, 2015-10-08 at 14:56 +0100, Andrew Cooper wrote:
> On 08/10/15 13:53, Dario Faggioli wrote:
> > @@ -1443,7 +1433,7 @@ csched2_dom_cntl(
> >
> > if ( op->u.credit2.weight != 0 )
> > {
> > - struct list_head *iter;
> > + struct vcpu *vc;
>
> Any chance of starting to align on the more common practice of just v
> for a vcpu?
>
I see. I'm a bit split, though. It's v in schedule.c, but it's
**always** (with only 2 exceptions) vc in sched_*.c.
I know that, if we want to align, we need to start from somewhere, but
OTOH, consistency is rather helpful when reading this code.
I'll think about it...
> > @@ -1539,9 +1528,7 @@ csched2_free_domdata(const struct scheduler
> > *ops, void *data)
> > static void
> > csched2_dom_destroy(const struct scheduler *ops, struct domain
> > *dom)
> > {
> > - struct csched2_dom *sdom = CSCHED2_DOM(dom);
> > -
> > - BUG_ON(!list_empty(&sdom->vcpu));
> > + BUG_ON(CSCHED2_DOM(dom)->nr_vcpus != 0);
>
> This is a latent bug (excuse the pun) which can be triggered by
> userspace. There is no guarantee or requirement that a domain
> registered with a scheduler has vcpus.
>
Mmm... I think the original check wanted to catch cases where a domain
(or, at least, its scheduling-related bits) is being destroyed with
vcpus still present.
While trying to retain the BUG_ON, even with sdom->vcpu no longer being
there, I probably went too far.
BUG_ON(CSCHED2_DOM(dom)->nr_vcpus > 0);
should be fine, I think.
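To be concrete, here is a minimal sketch of the pairing the check
relies on. The nr_vcpus increment lives on the insert path, presumably
in csched2_vcpu_insert(); that hunk is not quoted in this thread, so
take it as an assumption:

    static void
    csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
    {
        struct csched2_vcpu *svc = CSCHED2_VCPU(vc);

        /* ... runqueue insertion, as in the patch ... */

        /* Assumed: the matching increment for the decrement done
         * in csched2_vcpu_remove(). */
        svc->sdom->nr_vcpus++;
    }

    static void
    csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
    {
        /* Fires if the domain is destroyed with vcpus still attached;
         * a domain that never had any vcpus has nr_vcpus == 0 and
         * sails through. */
        BUG_ON(CSCHED2_DOM(dom)->nr_vcpus > 0);

        csched2_free_domdata(ops, CSCHED2_DOM(dom));
    }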
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 15:32 ` Dario Faggioli
@ 2015-10-08 15:39 ` Andrew Cooper
0 siblings, 0 replies; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 15:39 UTC (permalink / raw)
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 08/10/15 16:32, Dario Faggioli wrote:
> On Thu, 2015-10-08 at 14:56 +0100, Andrew Cooper wrote:
>> On 08/10/15 13:53, Dario Faggioli wrote:
>>> @@ -1443,7 +1433,7 @@ csched2_dom_cntl(
>>>
>>> if ( op->u.credit2.weight != 0 )
>>> {
>>> - struct list_head *iter;
>>> + struct vcpu *vc;
>> Any chance of starting to align on the more common practice of just v
>> for a vcpu?
>>
> I see. I'm a bit split, though. It's v in schedule.c, but it's
> **always** (with only 2 exceptions) vc in sched_*.c.
>
> I know that, if we want to align, we need to start from somewhere, but
> OTOH, consistency is rather helpful when reading this code.
>
> I'll think about it...
>
>>> @@ -1539,9 +1528,7 @@ csched2_free_domdata(const struct scheduler
>>> *ops, void *data)
>>> static void
>>> csched2_dom_destroy(const struct scheduler *ops, struct domain
>>> *dom)
>>> {
>>> - struct csched2_dom *sdom = CSCHED2_DOM(dom);
>>> -
>>> - BUG_ON(!list_empty(&sdom->vcpu));
>>> + BUG_ON(CSCHED2_DOM(dom)->nr_vcpus != 0);
>> This is a latent bug (excuse the pun) which can be triggered by
>> userspace. There is no guarantee or requirement that a domain
>> registered with a scheduler has vcpus.
>>
> Mmm... I think the original check wanted to catch cases where a domain
> (or, at least, its scheduling related bits) is being destroyed with
> vcpus still on.
Oops, sorry - I got the condition inverted when thinking about it. Also,
this is a csched2_dom rather than a struct domain.
I rescind my query.
~Andrew
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
2015-10-08 13:10 ` Andrew Cooper
2015-10-08 13:56 ` Andrew Cooper
@ 2015-10-08 15:40 ` George Dunlap
2 siblings, 0 replies; 29+ messages in thread
From: George Dunlap @ 2015-10-08 15:40 UTC (permalink / raw)
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 08/10/15 13:53, Dario Faggioli wrote:
> As, currently, there is no reason to bother having
> it and keeping it updated.
>
> In fact, it is only used for dumping and changing
> vCPU parameters, but that can be achieved easily with
> for_each_vcpu.
>
> While there, improve alignment of comments, and
> add a const qualifier to a pointer, making things
> more consistent with what happens everywhere else
> in the source file.
>
> This also allows us to kill one of the remaining
> FIXMEs in the code, which is always good.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> xen/common/sched_credit2.c | 34 +++++++++++-----------------------
> xen/common/sched_rt.c | 2 +-
> 2 files changed, 12 insertions(+), 24 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 234f798..178a665 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -234,9 +234,8 @@ struct csched2_private {
> * Virtual CPU
> */
> struct csched2_vcpu {
> - struct list_head rqd_elem; /* On the runqueue data list */
> - struct list_head sdom_elem; /* On the domain vcpu list */
> - struct list_head runq_elem; /* On the runqueue */
> + struct list_head rqd_elem; /* On the runqueue data list */
> + struct list_head runq_elem; /* On the runqueue */
> struct csched2_runqueue_data *rqd; /* Up-pointer to the runqueue */
>
> /* Up-pointers */
> @@ -261,7 +260,6 @@ struct csched2_vcpu {
> * Domain
> */
> struct csched2_dom {
> - struct list_head vcpu;
> struct list_head sdom_elem;
> struct domain *dom;
> uint16_t weight;
> @@ -772,7 +770,6 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
> return NULL;
>
> INIT_LIST_HEAD(&svc->rqd_elem);
> - INIT_LIST_HEAD(&svc->sdom_elem);
> INIT_LIST_HEAD(&svc->runq_elem);
>
> svc->sdom = dd;
> @@ -876,9 +873,6 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>
> BUG_ON(is_idle_vcpu(vc));
>
> - /* FIXME: Do we need the private lock here? */
> - list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
> -
> /* Add vcpu to runqueue of initial processor */
> lock = vcpu_schedule_lock_irq(vc);
>
> @@ -923,10 +917,6 @@ csched2_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
>
> vcpu_schedule_unlock_irq(lock, vc);
>
> - /* Remove from sdom list. Don't need a lock for this, as it's called
> - * syncronously when nothing else can happen. */
> - list_del_init(&svc->sdom_elem);
> -
> svc->sdom->nr_vcpus--;
> }
> }
> @@ -1443,7 +1433,7 @@ csched2_dom_cntl(
>
> if ( op->u.credit2.weight != 0 )
> {
> - struct list_head *iter;
> + struct vcpu *vc;
> int old_weight;
>
> old_weight = sdom->weight;
> @@ -1451,9 +1441,9 @@ csched2_dom_cntl(
> sdom->weight = op->u.credit2.weight;
>
> /* Update weights for vcpus, and max_weight for runqueues on which they reside */
> - list_for_each ( iter, &sdom->vcpu )
> + for_each_vcpu ( d, vc )
> {
> - struct csched2_vcpu *svc = list_entry(iter, struct csched2_vcpu, sdom_elem);
> + struct csched2_vcpu *svc = CSCHED2_VCPU(vc);
>
> /* NB: Locking order is important here. Because we grab this lock here, we
> * must never lock csched2_priv.lock if we're holding a runqueue lock.
> @@ -1487,7 +1477,6 @@ csched2_alloc_domdata(const struct scheduler *ops, struct domain *dom)
> return NULL;
>
> /* Initialize credit and weight */
> - INIT_LIST_HEAD(&sdom->vcpu);
> INIT_LIST_HEAD(&sdom->sdom_elem);
> sdom->dom = dom;
> sdom->weight = CSCHED2_DEFAULT_WEIGHT;
> @@ -1539,9 +1528,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
> static void
> csched2_dom_destroy(const struct scheduler *ops, struct domain *dom)
> {
> - struct csched2_dom *sdom = CSCHED2_DOM(dom);
> -
> - BUG_ON(!list_empty(&sdom->vcpu));
> + BUG_ON(CSCHED2_DOM(dom)->nr_vcpus != 0);
>
> csched2_free_domdata(ops, CSCHED2_DOM(dom));
> }
> @@ -1881,7 +1868,7 @@ csched2_dump_pcpu(const struct scheduler *ops, int cpu)
> static void
> csched2_dump(const struct scheduler *ops)
> {
> - struct list_head *iter_sdom, *iter_svc;
> + struct list_head *iter_sdom;
> struct csched2_private *prv = CSCHED2_PRIV(ops);
> unsigned long flags;
> int i, loop;
> @@ -1926,6 +1913,8 @@ csched2_dump(const struct scheduler *ops)
> list_for_each( iter_sdom, &prv->sdom )
> {
> struct csched2_dom *sdom;
> + struct vcpu *vc;
> +
> sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
>
> printk("\tDomain: %d w %d v %d\n\t",
> @@ -1933,12 +1922,11 @@ csched2_dump(const struct scheduler *ops)
> sdom->weight,
> sdom->nr_vcpus);
>
> - list_for_each( iter_svc, &sdom->vcpu )
> + for_each_vcpu( sdom->dom, vc )
> {
> - struct csched2_vcpu *svc;
> + struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
> spinlock_t *lock;
>
> - svc = list_entry(iter_svc, struct csched2_vcpu, sdom_elem);
> lock = vcpu_schedule_lock(svc->vcpu);
>
> printk("\t%3d: ", ++loop);
> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> index 797adc1..0ca4902 100644
> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -1147,7 +1147,7 @@ rt_dom_cntl(
> break;
> }
> spin_lock_irqsave(&prv->lock, flags);
> - for_each_vcpu( sdom->dom, vc )
> + for_each_vcpu( d, vc )
> {
> svc = rt_vcpu(vc);
> svc->period = MICROSECS(op->u.rtds.period); /* transfer to nanosec */
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
` (4 preceding siblings ...)
2015-10-08 12:53 ` [PATCH 5/7] xen: sched: get rid of the per domain vCPU list in Credit2 Dario Faggioli
@ 2015-10-08 12:53 ` Dario Faggioli
2015-10-08 15:42 ` George Dunlap
2015-10-08 12:53 ` [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW() Dario Faggioli
6 siblings, 1 reply; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:53 UTC (permalink / raw)
To: xen-devel; +Cc: Juergen Gross, George Dunlap
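The domain header printed by csched2_dump() ends in "\n\t", while each
vCPU line below it already prints its own leading tab ("\t%3d: "), so
the first vCPU line of every domain comes out one tab too deep.
Illustrative output, with tabs rendered as indentation, made-up values,
and the tail of each vCPU line elided:

    Before:
        Domain: 0 w 256 v 2
                  1: ...
          2: ...

    After:
        Domain: 0 w 256 v 2
          1: ...
          2: ...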
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
xen/common/sched_credit2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 178a665..9213d98 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1917,7 +1917,7 @@ csched2_dump(const struct scheduler *ops)
sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
- printk("\tDomain: %d w %d v %d\n\t",
+ printk("\tDomain: %d w %d v %d\n",
sdom->dom->domain_id,
sdom->weight,
sdom->nr_vcpus);
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump
2015-10-08 12:53 ` [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump Dario Faggioli
@ 2015-10-08 15:42 ` George Dunlap
2015-10-08 15:59 ` Dario Faggioli
0 siblings, 1 reply; 29+ messages in thread
From: George Dunlap @ 2015-10-08 15:42 UTC (permalink / raw)
To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, George Dunlap
On 08/10/15 13:53, Dario Faggioli wrote:
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
Oh, I didn't realize this one was going to have stuff from the previous
series as well. :-)
Acked-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> xen/common/sched_credit2.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 178a665..9213d98 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -1917,7 +1917,7 @@ csched2_dump(const struct scheduler *ops)
>
> sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
>
> - printk("\tDomain: %d w %d v %d\n\t",
> + printk("\tDomain: %d w %d v %d\n",
> sdom->dom->domain_id,
> sdom->weight,
> sdom->nr_vcpus);
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump
2015-10-08 15:42 ` George Dunlap
@ 2015-10-08 15:59 ` Dario Faggioli
0 siblings, 0 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 15:59 UTC (permalink / raw)
To: George Dunlap, xen-devel; +Cc: Juergen Gross, George Dunlap
On Thu, 2015-10-08 at 16:42 +0100, George Dunlap wrote:
> On 08/10/15 13:53, Dario Faggioli wrote:
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> > Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Oh, I didn't realize this one was going to have stuff from the
> previous
> series as well. :-)
>
Just this patch. Since I was doing a new series, it seemed more
appropriate here than there.
I should have made this clearer in the cover letter, sorry.
> Acked-by: George Dunlap <george.dunlap@citrix.com>
>
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW()
2015-10-08 12:52 [PATCH 0/7] xen: sched: fix locking of {insert, remove}_vcpu() Dario Faggioli
` (5 preceding siblings ...)
2015-10-08 12:53 ` [PATCH 6/7] xen: sched: fix an 'off by one \t' in credit2 debug dump Dario Faggioli
@ 2015-10-08 12:53 ` Dario Faggioli
2015-10-08 13:12 ` Andrew Cooper
2015-10-09 5:09 ` Juergen Gross
6 siblings, 2 replies; 29+ messages in thread
From: Dario Faggioli @ 2015-10-08 12:53 UTC (permalink / raw)
To: xen-devel; +Cc: Juergen Gross, George Dunlap
rather than its hexadecimal representation. This makes
it easier to compare the actual system time with other
times being printed out (e.g., deadlines in RTDS).
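As an illustration (made-up value; s_time_t is a signed 64-bit count of
nanoseconds, and PRI_stime is assumed to expand to a PRId64-style
decimal format), the dump line changes from:

    NOW=0x0000001000000000

to:

    NOW=68719476736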
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
xen/common/cpupool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 69b984c..e79850b 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -744,7 +744,7 @@ void dump_runq(unsigned char key)
printk("sched_smt_power_savings: %s\n",
sched_smt_power_savings? "enabled":"disabled");
- printk("NOW=0x%08X%08X\n", (u32)(now>>32), (u32)now);
+ printk("NOW=%"PRI_stime"\n", now);
print_cpumap("Online Cpus", &cpu_online_map);
if ( !cpumask_empty(&cpupool_free_cpus) )
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW()
2015-10-08 12:53 ` [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW() Dario Faggioli
@ 2015-10-08 13:12 ` Andrew Cooper
2015-10-08 15:37 ` Jan Beulich
2015-10-09 5:09 ` Juergen Gross
1 sibling, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2015-10-08 13:12 UTC (permalink / raw)
To: Dario Faggioli, xen-devel; +Cc: Juergen Gross, George Dunlap
On 08/10/15 13:53, Dario Faggioli wrote:
> rather than its hexadecimal representation. This makes
> it easier to compare the actual system time with other
> times being printed out (e.g., deadlines in RTDS).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
There are a number of other areas of code that do the same (I noticed
them while doing the keyhandler refactor). I was going to find and fix
them all in some free time.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW()
2015-10-08 13:12 ` Andrew Cooper
@ 2015-10-08 15:37 ` Jan Beulich
0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2015-10-08 15:37 UTC (permalink / raw)
To: Andrew Cooper, Dario Faggioli; +Cc: George Dunlap, xen-devel, Juergen Gross
>>> On 08.10.15 at 15:12, <andrew.cooper3@citrix.com> wrote:
> On 08/10/15 13:53, Dario Faggioli wrote:
>> rather than its hexadecimal representation. This makes
>> it easier to compare the actual system time with other
>> times being printed out (e.g., deadlines in RTDS).
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> There are a number of other areas of code which do the same (which I
> noticed when doing the keyhandler refactor). I was going to find and
> fix them all in some free time.
I think in a number of those cases printing NOW() is actually
meaningless, and it would be better deleted than fixed.
Jan
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW()
2015-10-08 12:53 ` [PATCH 7/7] xen: sched / cpupool: dump the actual value of NOW() Dario Faggioli
2015-10-08 13:12 ` Andrew Cooper
@ 2015-10-09 5:09 ` Juergen Gross
1 sibling, 0 replies; 29+ messages in thread
From: Juergen Gross @ 2015-10-09 5:09 UTC (permalink / raw)
To: Dario Faggioli, xen-devel; +Cc: George Dunlap
On 10/08/2015 02:53 PM, Dario Faggioli wrote:
> rather than its hexadecimal representation. This makes
> it easier to compare the actual system time with other
> times being printed out (e.g., deadlines in RTDS).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Juergen Gross <jgross@suse.com>
> ---
> Cc: Juergen Gross <jgross@suse.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> ---
> xen/common/cpupool.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 69b984c..e79850b 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -744,7 +744,7 @@ void dump_runq(unsigned char key)
>
> printk("sched_smt_power_savings: %s\n",
> sched_smt_power_savings? "enabled":"disabled");
> - printk("NOW=0x%08X%08X\n", (u32)(now>>32), (u32)now);
> + printk("NOW=%"PRI_stime"\n", now);
>
> print_cpumap("Online Cpus", &cpu_online_map);
> if ( !cpumask_empty(&cpupool_free_cpus) )
^ permalink raw reply [flat|nested] 29+ messages in thread