From mboxrd@z Thu Jan 1 00:00:00 1970
From: Raghavendra K T
Subject: Re: [Xen-devel] [PATCH V5] x86 spinlock: Fix memory corruption on completing completions
Date: Tue, 17 Feb 2015 15:33:23 +0530
Message-ID: <54E311EB.6070602@linux.vnet.ibm.com>
References: <1423979744-18320-1-git-send-email-raghavendra.kt@linux.vnet.ibm.com>
 <20150215173043.GA7471@linux.vnet.ibm.com>
 <54E21F10.1040402@cantab.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Cc: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, peterz@infradead.org,
 torvalds@linux-foundation.org, konrad.wilk@oracle.com, pbonzini@redhat.com,
 waiman.long@hp.com, jeremy@goop.org, ak@linux.intel.com,
 a.ryabinin@samsung.com, kvm@vger.kernel.org, borntraeger@de.ibm.com,
 jasowang@redhat.com, x86@kernel.org, oleg@redhat.com,
 linux-kernel@vger.kernel.org, paul.gortmaker@windriver.com,
 dave@stgolabs.net, xen-devel@lists.xenproject.org, davej@redhat.com,
 akpm@linux-foundation.org, paulmck@linux.vnet.ibm.com,
 virtualization@lists.linux-foundation.org, sasha.levin@oracle.com
To: David Vrabel
Return-path:
In-Reply-To: <54E21F10.1040402@cantab.net>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 02/16/2015 10:17 PM, David Vrabel wrote:
> On 15/02/15 17:30, Raghavendra K T wrote:
>> --- a/arch/x86/xen/spinlock.c
>> +++ b/arch/x86/xen/spinlock.c
>> @@ -41,7 +41,7 @@ static u8 zero_stats;
>>  static inline void check_zero(void)
>>  {
>>  	u8 ret;
>> -	u8 old = ACCESS_ONCE(zero_stats);
>> +	u8 old = READ_ONCE(zero_stats);
>>  	if (unlikely(old)) {
>>  		ret = cmpxchg(&zero_stats, old, 0);
>>  		/* This ensures only one fellow resets the stat */
>> @@ -112,6 +112,7 @@ __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>  	struct xen_lock_waiting *w = this_cpu_ptr(&lock_waiting);
>>  	int cpu = smp_processor_id();
>>  	u64 start;
>> +	__ticket_t head;
>>  	unsigned long flags;
>>
>>  	/* If kicker interrupts not initialized yet, just spin */
>> @@ -159,11 +160,15 @@ __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>  	 */
>>  	__ticket_enter_slowpath(lock);
>>
>> +	/* make sure enter_slowpath, which is atomic does not cross the read */
>> +	smp_mb__after_atomic();
>> +
>>  	/*
>>  	 * check again make sure it didn't become free while
>>  	 * we weren't looking
>>  	 */
>> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +	head = READ_ONCE(lock->tickets.head);
>> +	if (__tickets_equal(head, want)) {
>>  		add_stats(TAKEN_SLOW_PICKUP, 1);
>>  		goto out;
>>  	}
>> @@ -204,8 +209,8 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
>>  		const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>>
>>  		/* Make sure we read lock before want */
>> -		if (ACCESS_ONCE(w->lock) == lock &&
>> -		    ACCESS_ONCE(w->want) == next) {
>> +		if (READ_ONCE(w->lock) == lock &&
>> +		    READ_ONCE(w->want) == next) {
>>  			add_stats(RELEASED_SLOW_KICKED, 1);
>>  			xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
>>  			break;
>
> Acked-by: David Vrabel
>
> Although some of the ACCESS_ONCE to READ_ONCE changes are cosmetic and
> are perhaps best left out of a patch destined for stable.
>

Thanks. Yes, I will send out a separate patch for -stable without the
READ_ONCE changes once this patch goes in.