From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dario Faggioli
Subject: [PATCH 1/7] xen: sched: fix locking of remove_vcpu() in credit1
Date: Thu, 08 Oct 2015 14:52:36 +0200
Message-ID: <20151008125236.12522.7048.stgit@Solace.station>
References: <20151008124027.12522.42552.stgit@Solace.station>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from )
	id 1ZkAga-0001ME-8g
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2015 12:52:40 +0000
Received: by wicge5 with SMTP id ge5so24372530wic.0
	for ; Thu, 08 Oct 2015 05:52:38 -0700 (PDT)
In-Reply-To: <20151008124027.12522.42552.stgit@Solace.station>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xenproject.org
Cc: George Dunlap, Andrew Cooper, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

In fact, csched_vcpu_remove() (i.e., the credit1 implementation of
remove_vcpu()) manipulates runqueues, so holding the runqueue lock is
necessary.

Also, while there, *_lock_irq() (for the private lock) is enough; there
is no need for *_lock_irqsave().

Signed-off-by: Dario Faggioli
---
Cc: George Dunlap
Cc: Andrew Cooper
Cc: Jan Beulich
---
Changes from v1 (within the other series):
 * split the patch (wrt the original patch, in the original series),
   and take care, in this one, only of remove_vcpu();
 * removed pointless parentheses.
---
 xen/common/sched_credit.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index b8f28fe..6f71e0d 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -926,7 +926,7 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_vcpu * const svc = CSCHED_VCPU(vc);
     struct csched_dom * const sdom = svc->sdom;
-    unsigned long flags;
+    spinlock_t *lock;
 
     SCHED_STAT_CRANK(vcpu_remove);
 
@@ -936,15 +936,19 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
         vcpu_unpause(svc->vcpu);
     }
 
+    lock = vcpu_schedule_lock_irq(vc);
+
     if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
 
-    spin_lock_irqsave(&(prv->lock), flags);
+    vcpu_schedule_unlock_irq(lock, vc);
+
+    spin_lock_irq(&prv->lock);
 
     if ( !list_empty(&svc->active_vcpu_elem) )
         __csched_vcpu_acct_stop_locked(prv, svc);
 
-    spin_unlock_irqrestore(&(prv->lock), flags);
+    spin_unlock_irq(&prv->lock);
 
     BUG_ON( sdom == NULL );
     BUG_ON( !list_empty(&svc->runq_elem) );
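
For readers less familiar with the credit1 code, the sketch below condenses
what csched_vcpu_remove() does once the patch is applied, keeping only the
locking-relevant steps (the stats counters, the vcpu_unpause() path and the
BUG_ON()s are left out). All helper names are taken from the diff above; the
sketch is not buildable outside the Xen tree and is only meant to illustrate
the two-lock sequence and why, as the changelog says, the plain _irq lock
variants suffice.

/*
 * Condensed, illustrative sketch of csched_vcpu_remove() after this
 * patch -- not the full function body.
 */
static void
csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
{
    struct csched_private *prv = CSCHED_PRIV(ops);
    struct csched_vcpu * const svc = CSCHED_VCPU(vc);
    spinlock_t *lock;

    /*
     * Pulling the vcpu off its runqueue manipulates the runqueue, so
     * it must happen under the runqueue (scheduler) lock.
     */
    lock = vcpu_schedule_lock_irq(vc);

    if ( __vcpu_on_runq(svc) )
        __runq_remove(svc);

    vcpu_schedule_unlock_irq(lock, vc);

    /*
     * The private lock is taken separately, after the runqueue lock has
     * been dropped.  Per the changelog, the plain _irq variants are
     * enough here, so there is no need to save/restore the flags word
     * with the _irqsave/_irqrestore variants.
     */
    spin_lock_irq(&prv->lock);

    if ( !list_empty(&svc->active_vcpu_elem) )
        __csched_vcpu_acct_stop_locked(prv, svc);

    spin_unlock_irq(&prv->lock);
}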