From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Vrabel
Subject: Re: [Xen-devel] [PATCH v15 12/15] pvqspinlock, x86: Enable PV qspinlock for Xen
Date: Wed, 8 Apr 2015 13:01:36 +0100
Message-ID: <552518A0.10205@citrix.com>
References: <1428375350-9213-1-git-send-email-Waiman.Long@hp.com>
 <1428375350-9213-13-git-send-email-Waiman.Long@hp.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1428375350-9213-13-git-send-email-Waiman.Long@hp.com>
Sender: linux-kernel-owner@vger.kernel.org
To: Waiman Long, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
 Peter Zijlstra
Cc: linux-arch@vger.kernel.org, Rik van Riel, Raghavendra K T,
 kvm@vger.kernel.org, Daniel J Blueman, x86@kernel.org, Paolo Bonzini,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Scott J Norton, David Vrabel, Oleg Nesterov,
 xen-devel@lists.xenproject.org, Boris Ostrovsky, "Paul E. McKenney",
 Linus Torvalds, Douglas Hatch
List-Id: linux-arch.vger.kernel.org

On 07/04/15 03:55, Waiman Long wrote:
> This patch adds the necessary Xen specific code to allow Xen to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.

This basically looks the same as the version I wrote, except I think
you broke it.

> +static void xen_qlock_wait(u8 *byte, u8 val)
> +{
> +	int irq = __this_cpu_read(lock_kicker_irq);
> +
> +	/* If kicker interrupts not initialized yet, just spin */
> +	if (irq == -1)
> +		return;
> +
> +	/* clear pending */
> +	xen_clear_irq_pending(irq);
> +
> +	/*
> +	 * We check the byte value after clearing pending IRQ to make sure
> +	 * that we won't miss a wakeup event because of the clearing.

My version had a barrier() here to ensure this.  The documentation of
READ_ONCE() suggests that it is not sufficient to meet this
requirement (and a READ_ONCE() here is not required anyway).
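For comparison, the corresponding section in my version looked
something like this (a sketch from memory, not the exact code I
posted; barrier() is the compiler barrier from <linux/compiler.h>):

	/* clear pending */
	xen_clear_irq_pending(irq);

	/*
	 * Prevent the compiler from hoisting the load of *byte above
	 * the clear of the pending bit, so that a kick delivered
	 * between the clear and the check cannot be missed.
	 */
	barrier();

	if (*byte != val)
		return;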
McKenney" , Linus Torvalds , Douglas Hatch Message-ID: <20150408120136.AqOHclDV_D8WGQEztsAhGWP8TtoDzqmxiJqM4Iq47Ho@z> On 07/04/15 03:55, Waiman Long wrote: > This patch adds the necessary Xen specific code to allow Xen to > support the CPU halting and kicking operations needed by the queue > spinlock PV code. This basically looks the same as the version I wrote, except I think you broke it. > +static void xen_qlock_wait(u8 *byte, u8 val) > +{ > + int irq = __this_cpu_read(lock_kicker_irq); > + > + /* If kicker interrupts not initialized yet, just spin */ > + if (irq == -1) > + return; > + > + /* clear pending */ > + xen_clear_irq_pending(irq); > + > + /* > + * We check the byte value after clearing pending IRQ to make sure > + * that we won't miss a wakeup event because of the clearing. My version had a barrier() here to ensure this. The documentation of READ_ONCE() suggests that it is not sufficient to meet this requirement (and a READ_ONCE() here is not required anyway). > + * > + * The sync_clear_bit() call in xen_clear_irq_pending() is atomic. > + * So it is effectively a memory barrier for x86. > + */ > + if (READ_ONCE(*byte) != val) > + return; > + > + /* > + * If an interrupt happens here, it will leave the wakeup irq > + * pending, which will cause xen_poll_irq() to return > + * immediately. > + */ > + > + /* Block until irq becomes pending (or perhaps a spurious wakeup) */ > + xen_poll_irq(irq); > +} David