From: Keir Fraser
Subject: Re: [PATCH] sched: fix race between sched_move_domain() and vcpu_wake()
Date: Fri, 11 Oct 2013 09:07:43 +0100
To: Jan Beulich, Andrew Cooper, David Vrabel
Cc: George Dunlap, xen-devel, Juergen Gross

On 11/10/2013 08:12, "Jan Beulich" wrote:

>>>> On 10.10.13 at 20:27, Keir Fraser wrote:
>> On 10/10/2013 19:01, "Andrew Cooper" wrote:
>>
>>>> Just taking the lock for the old processor seemed sufficient to me as
>>>> anything seeing the new value would lock and unlock using the same new
>>>> value. But do we need to take the schedule_lock for the new processor
>>>> as well (in the right order of course)?
>>>
>>> David and I have been discussing this for a while, involving a
>>> whiteboard, and have not come to a firm conclusion either way.
>>>
>>> From my point of view, holding the appropriate vcpu schedule lock
>>> entitles you to play with vcpu scheduling state, which involves
>>> following v->sched_priv, which we update outside the critical region
>>> later.
>>>
>>> Only taking the one lock still leaves a race condition where another
>>> cpu can follow the new v->processor and obtain the schedule lock, at
>>> which point we have two threads both working on the internals of a
>>> vcpu. The change below certainly will fix the current bug of locking
>>> one spinlock and unlocking another.
>>>
>>> My gut feeling is that we do need to take both locks to be safe in
>>> terms of data access, but we would appreciate advice from someone more
>>> familiar with the scheduler locking.
>>
>> If it's that tricky to work out, why not just take the two locks,
>> appropriately ordered? This isn't a hot path.
>
> Shouldn't we rather fix the locking mechanism itself, making
> vcpu_schedule_lock...() return the lock, such that the unlock
> will unavoidably use the correct lock?
>
> That would at once allow dropping vcpu_schedule_unlock...()
> altogether, which would be a good thing even if only considering
> the explicit uses of local_irq_disable() there (instead of using the
> right spin lock primitives). And if done that way, replacing the
> explicit uses of local_irq_enable() in the locking paths would also
> seem rather desirable - after all this defeats the spin lock
> primitives wanting to re-enable interrupts while waiting for a
> lock.

It feels to me like this is separate from Andrew's concern. Also, I think
that holding the schedule_lock should protect you from changes to
v->processor. But if that's really unreasonable (e.g. inefficient), then
your suggestion here is perfectly sensible. Improving the
vcpu_schedule_lock_irq implementations to use the provided underlying
spin_lock_irq functions would also be nice, I guess :)

 -- Keir

> Jan
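
A minimal sketch of the "take both locks, appropriately ordered" option Keir
suggests above, assuming the 4.3-era per-CPU schedule_data layout with a
schedule_lock pointer. The helper name vcpu_move_locked() is invented for
illustration; this is a sketch, not the patch under discussion:

/*
 * Illustrative sketch only: take the schedule locks of both the old and
 * the new processor in a fixed (address-based) order to avoid deadlock
 * before updating v->processor.  Assumes the caller (e.g.
 * sched_move_domain() on a paused domain) already prevents v->processor
 * from changing concurrently.
 */
static void vcpu_move_locked(struct vcpu *v, unsigned int new_cpu)
{
    spinlock_t *old_lock = per_cpu(schedule_data, v->processor).schedule_lock;
    spinlock_t *new_lock = per_cpu(schedule_data, new_cpu).schedule_lock;

    if ( old_lock == new_lock )
        spin_lock_irq(old_lock);
    else if ( old_lock < new_lock )
    {
        spin_lock_irq(old_lock);
        spin_lock(new_lock);
    }
    else
    {
        spin_lock_irq(new_lock);
        spin_lock(old_lock);
    }

    v->processor = new_cpu;

    if ( new_lock != old_lock )
        spin_unlock(new_lock);
    spin_unlock_irq(old_lock);
}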
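
Likewise, a rough sketch of the interface change Jan describes: the lock
function re-checks v->processor after acquiring the lock and returns the lock
it actually took, so the caller cannot release a different one. The exact
names and signatures here are assumptions for illustration, not what any
eventual patch necessarily looks like:

/*
 * Illustrative sketch only: return the lock that was actually acquired,
 * so the unlock unavoidably uses the correct lock.
 */
static inline spinlock_t *vcpu_schedule_lock_irq(struct vcpu *v)
{
    spinlock_t *lock;

    for ( ; ; )
    {
        lock = per_cpu(schedule_data, v->processor).schedule_lock;
        spin_lock_irq(lock);
        /* v's lock may have changed while we were spinning; retry if so. */
        if ( likely(lock == per_cpu(schedule_data, v->processor).schedule_lock) )
            return lock;
        spin_unlock_irq(lock);
    }
}

/* Caller side, e.g. in vcpu_wake(): */
    spinlock_t *lock = vcpu_schedule_lock_irq(v);
    /* ... act on v's scheduling state ... */
    spin_unlock_irq(lock);  /* necessarily the lock that was taken */

With this shape the separate vcpu_schedule_unlock_irq() wrapper can go away,
as Jan notes, and the plain spin_lock_irq()/spin_unlock_irq() primitives
replace the open-coded local_irq_disable()/local_irq_enable() calls.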